I have a WebGL application I've written using three.js, but the FPS is not good enough on some of my test machines. I've tried to profile the application using Chrome's about:tracing, with help from this article: http://www.html5rocks.com/en/tutorials/games/abouttracing/
It appears that the GPU is being overloaded. I also found that my FPS falls drastically when my entire scene is in the camera's view. The scene contains about 17 meshes and a single directional light source. It's not really a heavy scene; I've seen much heavier scenes rendered flawlessly on the same GPU.
So, what changes can I make to the scene to make it less heavy, without completely changing it? I've already tried removing the textures, but that doesn't seem to fix the problem.
Is there a way to figure out what computation three.js is pushing onto the GPU? Or would this break the basic abstraction three.js provides?
What are general tips for profiling GPU-bound WebGL/three.js apps?
There are various things to try.
Are you draw bound?
Change your canvas to 1x1 pixels. Does your framerate go way up? If so, you're drawing too many pixels or your fragment shaders are too complex.
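In three.js terms, a minimal sketch of that test, assuming an existing `renderer`:

```js
// Fill-rate test: render into a 1x1 buffer and watch the FPS.
renderer.setSize(1, 1, false);
// ...run the normal render loop and compare frame rates...
// Restore afterwards, e.g.:
// renderer.setSize(window.innerWidth, window.innerHeight, false);
```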
To see if simplifying your fragment shader would help, use a simpler shader. I don't know three.js that well; maybe the Basic Material?
Do you have shadows? Turn them off. Does it go faster? Can you use simpler shadows? For example, the shadows in this sample are fake: they are just planes with a circle texture.
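A quick hedged way to test both at once in a recent three.js, assuming `scene` and `renderer` are in scope:

```js
// Swap every mesh's material for the cheapest unlit one and turn
// off shadow maps, then compare the frame rate.
const debugMaterial = new THREE.MeshBasicMaterial({ color: 0x808080 });
scene.traverse((obj) => {
  // Original materials are discarded; do this on a throwaway build.
  if (obj.isMesh) obj.material = debugMaterial;
});
renderer.shadowMap.enabled = false;
```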
Are you using any post-processing effects? Post-processing effects are expensive, especially on mobile GPUs.
Are you drawing lots of opaque stuff? If so, can you sort your draw order so you draw front to back (close to far)? I'm not sure whether three.js has an option for this. I know it can sort transparent stuff back to front, so it should be simple to reverse the test. Drawing front to back makes rendering go quicker, assuming you're drawing with the depth test on, because pixels in the back will be rejected by the DEPTH_TEST and the fragment shader won't run for them.
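I'm not sure what three.js exposes for this, but as a rough sketch you could bias the order yourself with `renderOrder` (lower values draw first); recent three.js versions may already sort opaque objects this way:

```js
// Recompute before each render: draw closer opaque meshes first so
// the depth test can reject hidden fragments. Assumes single
// (non-array) materials; `scene` and `camera` already exist.
const worldPos = new THREE.Vector3();
scene.traverse((obj) => {
  if (obj.isMesh && obj.material && !obj.material.transparent) {
    worldPos.setFromMatrixPosition(obj.matrixWorld);
    obj.renderOrder = worldPos.distanceTo(camera.position);
  }
});
```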
Another thing you can do to save bandwidth is draw to a smaller canvas and have it stretched with CSS to cover the area where you want it to appear. Lots of games do this.
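For example (a sketch; passing `false` tells three.js not to overwrite the canvas's CSS size):

```js
// Render at half resolution, display at full size via CSS stretch.
renderer.setSize(window.innerWidth / 2, window.innerHeight / 2, false);
renderer.domElement.style.width = '100%';
renderer.domElement.style.height = '100%';
```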
Are you geometry bound?
You say you're only drawing 17 meshes, but how big are those meshes? 17 twelve-triangle cubes, or 17 one-million-triangle meshes?
If you're geometry bound, can you simplify? If the geometry goes far into the distance, can you split it and use LODs? See the LOD sample.
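three.js has a built-in `THREE.LOD` helper for this; a minimal sketch, where `highGeo`, `midGeo`, and `lowGeo` are hypothetical geometries you'd prepare yourself (e.g. decimated copies of the same model):

```js
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(highGeo, material), 0);   // shown up close
lod.addLevel(new THREE.Mesh(midGeo, material), 50);   // mid distance
lod.addLevel(new THREE.Mesh(lowGeo, material), 200);  // far away
scene.add(lod);
// In the render loop (recent versions can also auto-update):
lod.update(camera);
```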
Related
I am working on a system to procedurally build meshes for "mines". Right now I don't want to achieve visual perfection; I am more focused on the basics.
I got to the point where I am able to generate the shape of the mines and, from that, generate the two meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but the problem is that the ground is really hard to map to UV coordinates properly, and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting it while its area is greater than X.
Here is a 2D rendering of the tessellation that highlights the contours, the triangles, and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (a displacement map as well; please ignore that for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
I think that, in theory, this should be right. My suspicion was that the order of the vertices was creating the problem, but if that were the case the texture should just appear rotated or otherwise odd, not odd AND stretched like that.
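For reference, what I'm doing is roughly this (simplified sketch; `groundGeo` stands in for the ground's BufferGeometry, assumed to lie in the XZ plane, on a recent three.js where the call is `setAttribute`):

```js
// Naive planar projection: normalize each vertex against the grid's
// bounding box so UVs land in [0, 1]. One UV pair per vertex.
groundGeo.computeBoundingBox();
const bb = groundGeo.boundingBox;
const pos = groundGeo.attributes.position;
const uvs = new Float32Array(pos.count * 2);
for (let i = 0; i < pos.count; i++) {
  uvs[2 * i]     = (pos.getX(i) - bb.min.x) / (bb.max.x - bb.min.x);
  uvs[2 * i + 1] = (pos.getZ(i) - bb.min.z) / (bb.max.z - bb.min.z);
}
groundGeo.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
```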
Once I get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to be able to procedurally build the mesh, send it to multiple clients, and achieve the same visual rendering. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on shaders on the client side; otherwise, placing tracks, carts, or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the rendering on the client side; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced, without needing a whole client/server infrastructure.
Any suggestions?
Thanks!
I sorted out the issue in my code; it was a pretty silly mistake: I was adding 3 UV coordinates per triangle instead of 1 per vertex, causing a huge visual mess. Once I sorted that out, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
I'm working on a WebGL application that works similarly to video compositing programs like After Effects.
The application takes textures and stacks them like layers. This can be as simple as drawing one texture on top of another, or using common blend modes like screen to combine several layers.
Each layer can be individually scaled/rotated/translated via an API call, and altogether the application forms basic compositing software.
Now, my question: doing all this in a single WebGL canvas is a lot to keep track of.
// Start webgl rant
The application won't know how many layers there are ahead of time, so textures, coordinate planes, and shaders will need to be created dynamically.
Blending the layers together would require writing out the math for each blend mode. Transforming vertex coordinates requires matrix math, and just doing things in WebGL in general requires lots of code, it being such a low-level API.
// End webgl rant
However, this could easily be solved by making a new canvas element for each layer. Each WebGL canvas would have a texture drawn onto it, and then I could scale/move/blend the layers using simple CSS code.
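Something like this sketch, where `layerCanvas` is one layer's WebGL canvas and `stage` is a hypothetical container div:

```js
// Stack the layer over the others and let the browser compositor
// blend and transform it.
layerCanvas.style.position = 'absolute';
layerCanvas.style.top = '0';
layerCanvas.style.left = '0';
layerCanvas.style.mixBlendMode = 'screen'; // CSS blend mode per layer
layerCanvas.style.transform = 'translate(40px, 20px) rotate(10deg) scale(1.5)';
document.getElementById('stage').appendChild(layerCanvas);
```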
On the surface, it seems like the performance hit wouldn't be that bad, because even if I did combine everything into a single context, each layer would still need its own texture and coordinate system. So the number of textures and coordinate systems stays the same, just spread across multiple contexts.
However, deep inside I know somehow this is horribly wrong and my computer's going to catch fire if I even try; I just can't figure out why.
With the goal of supporting ~4 layers at a time, would using multiple canvases be a valid option? Besides worrying about browsers having a maximum number of active WebGL contexts, are there any other limitations to be aware of?
I'm rewriting a legacy visualization built several years ago with an older version of Three.js. The visualization is ~20k 2D circles (nodes from a graph) drawn on a screen with positions and coloring already determined.
There's no animation involved other than when interaction occurs (zooming, clicking, etc.). The previous author used sprites for the circles (nodes) to show different states (node selected: glowing effect; node hidden: transparent; etc.).
I've been able to successfully reproduce much of the visualization using CircleBufferGeometry instead of a sprite.
I know this is potentially too vague a question, given it might be too specific to my use case, but I'm wondering if anyone has any insight into whether it's more performant to render ~20k sprites or ~20k CircleBufferGeometry with Three.js / webgl.
Thanks!
CircleBufferGeometry will have many more vertices per entity than a sprite would, since sprites should be drawn with gl.POINTS (one point == one vertex). Your vertex shader would process far more vertices with the circles than it would with sprites.
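As far as I know, the way to actually get gl.POINTS in three.js is `THREE.Points` (a `THREE.Sprite` is drawn as a small quad). A sketch for ~20k circles in a single draw call, where `positions`, `colors`, and the round `circleTexture` are assumptions:

```js
// One draw call for all nodes: each point is a single vertex.
const geo = new THREE.BufferGeometry();
geo.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geo.setAttribute('color', new THREE.BufferAttribute(colors, 3));
const mat = new THREE.PointsMaterial({
  size: 8,                  // pixels, since sizeAttenuation is off
  sizeAttenuation: false,
  vertexColors: true,
  map: circleTexture,       // round texture so points look like circles
  transparent: true,
  alphaTest: 0.5,           // discard the square corners
});
scene.add(new THREE.Points(geo, mat));
```

One caveat: gl.POINTS has a GPU-dependent maximum point size, so circles that get large on screen when zoomed in may need a different approach.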
So I found out that texturing planets can be really hard. I created a 4096x4096 image and wrapped it around a high-poly sphere. Apart from the possible memory-management performance issue that comes with a 3-4 MB image, the texture looks bad/pixelated in a close-up (orbital) view.
I was thinking I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium, and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if far away, drop the image from memory and apply the low or medium version.
To be honest, I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL/three.js has improved significantly over time, but since this all runs in the browser, I assume that thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything to swap high detail for low detail.
Because you can't make textures 1000000x100000, which is pretty much what you'd need to get the results you want, you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet; a sphere made of, say, 100 pieces when slightly closer; a sphere made of 1000 pieces when closer still; 10000 pieces when closer than that; etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere fully zoomed out and transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
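A hedged sketch of such a crossfade, run once per frame; `near` and `far` are made-up thresholds, and both spheres' materials would need `transparent: true`:

```js
const near = 100, far = 500;                 // made-up distances
const d = camera.position.distanceTo(planet.position);
const t = THREE.MathUtils.clamp((far - d) / (far - near), 0, 1);
highDetailSphere.material.opacity = t;       // fades in as you approach
lowDetailSphere.material.opacity = 1 - t;    // fades out
```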
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube mapping: each holds one cubemap face.
Then you loop through all the vertices, take each position vector, and normalize it. This will yield a sphere.
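A minimal sketch of that step in three.js, starting from a tessellated `BoxGeometry` (the radius is arbitrary):

```js
// Turn a tessellated box into a sphere by normalizing each vertex
// position and scaling it out to the radius.
const radius = 10;
const geo = new THREE.BoxGeometry(1, 1, 1, 32, 32, 32); // 6 tessellated faces
const pos = geo.attributes.position;
const v = new THREE.Vector3();
for (let i = 0; i < pos.count; i++) {
  v.fromBufferAttribute(pos, i).normalize().multiplyScalar(radius);
  pos.setXYZ(i, v.x, v.y, v.z);
}
geo.computeVertexNormals();
const sphere = new THREE.Mesh(geo, material);
```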
You can convert an equirectangular panorama image into a cubemap. I think this will let you get more resolution and less stretching for cheap.
For starters, the 4096 x 4096 should really be 4096x2048 on the default sphere with an equirectangular mapping, while the newly mapped sphere can hold 6 x 4096x4096 with no stretching and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only on the Z axis.
I tried to use:
a cube mapping shader - PROBLEM: artifacts with the shadow planes; the drawing is unstable.
a THREE.Sprite that duplicates the camera's movement - PROBLEM: artifacts with the shadow plane (it gets edge highlighting), OR only the other sprites are drawn, without the objects.
an HTML DOM background - PROBLEM: big, ugly aliasing in the models.
What can I try more? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over that first "buffer" - using the buffer as the background (painting it in 2D with an orthographic projection, and disabling depth-buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
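In three.js that might look roughly like this (untested sketch, per the caveat above; `bgScene`/`bgCamera` are an assumed separate scene and camera for the background):

```js
renderer.autoClear = false;        // we'll clear manually
function render() {
  renderer.clear();                // clear color + depth once
  renderer.render(bgScene, bgCamera);
  renderer.clearDepth();           // background never occludes the scene
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
render();
```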
If you want a "3D" background, i.e. something that will follow the rotation of your camera but not react to its movement (be infinitely far away), then the only way to do it is with a cubemap.
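In recent three.js versions you can assign a cube texture directly to `scene.background`; a sketch with placeholder paths and file names:

```js
const cubeTex = new THREE.CubeTextureLoader()
  .setPath('textures/sky/')        // hypothetical path
  .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);
scene.background = cubeTex;        // follows camera rotation, not position
```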
The other solution is an environment dome - a fully 3D object.
If you want a static background, then you should be able to just use an HTML background; I'm not sure why that would fail or what 'aliasing in models' you are talking about.