Awful frame rates under all browsers (Pixi JS Rendering) - javascript

I'm drawing an isometric grid with mouse-over hit area detection over each tile. It works, but the frame rates are awful. Any idea what is bringing the frame rates down so much? Surely WebGL is capable of attaining better FPS than this?
There are no moving sprites (only diamond-shaped PIXI.Graphics) drawn to the screen.
http://178.79.155.146/pc
Cheers,
Jordan

As a follow-up. With primitives we have to draw them to the stencil buffer each frame, then render that to the render buffer. With a sprite it is a single draw call to show the texture. If you have many sprites with the same texture we can batch them all together in a single draw call (we can't do that with Graphics).
Summary: For performance, use sprites (not WebGL primitive graphics!).
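As a rough illustration of that advice (not from the original thread), here is a minimal sketch that bakes the diamond primitive into a texture once and then reuses it for every tile as batched sprites. It assumes a Pixi version where PIXI.Application and renderer.generateTexture() exist; exact API names differ between Pixi releases, and the grid size and color are placeholders.

const app = new PIXI.Application({ width: 800, height: 600 });
document.body.appendChild(app.view);

// Draw the diamond primitive a single time.
const diamond = new PIXI.Graphics();
diamond.beginFill(0x88cc88);
diamond.moveTo(32, 0);
diamond.lineTo(64, 16);
diamond.lineTo(32, 32);
diamond.lineTo(0, 16);
diamond.closePath();
diamond.endFill();

// Bake it into a texture; every tile sprite shares this texture,
// so Pixi can batch them into very few draw calls.
const tileTexture = app.renderer.generateTexture(diamond);

for (let row = 0; row < 50; row++) {
  for (let col = 0; col < 50; col++) {
    const tile = new PIXI.Sprite(tileTexture);
    tile.position.set((col - row) * 32 + 400, (col + row) * 16);
    tile.interactive = true; // per-tile mouse-over hit detection still works
    app.stage.addChild(tile);
  }
}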

Related

Texture cache overflow for WebGL HTML5 game

I am creating an HTML5 web adventure game and making tilemaps with Tiled.
Even with Texture Packer, I seem to be exceeding the max cache of texture units, as I'm getting this error:
Texture cache overflow: 16 texture units available
WebGL Stats shows the limit is 16 for ~70% of devices, and my browser supports 16 texture units.
In game, I opened Chrome console to check WebGL specs:
WebGL2RenderingContext.MAX_TEXTURE_IMAGE_UNITS = 34930
WebGL2RenderingContext.MAX_VERTEX_TEXTURE_IMAGE_UNITS = 35660
WebGL2RenderingContext.MAX_COMBINED_TEXTURE_IMAGE_UNITS = 35661
This is a bit confusing, as this article shows the output should be more in the 0-10 range, not the 30,000 range:
maxTextureUnits = 8
maxVertexShaderTextureUnits = 4
maxFragmentShaderTextureUnits = 8
My question(s):
How can I determine which images in my packed texture atlas are causing the issues? I.e., how can I check the total textures?
Is it possible to force a higher cache limit?
The way to check those values is shown below. (The numbers you printed are the numeric values of the enum constants themselves, e.g. 0x8872 = 34930 for MAX_TEXTURE_IMAGE_UNITS, not the result of querying them.)
const gl = document.createElement('canvas').getContext('webgl2'); // any WebGL context will do
const maxFragmentShaderTextureUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
const maxVertexShaderTextureUnits = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
const maxTextureUnits = gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS);
Further, those values have to do with how many textures you can access in a single shader, not how many textures you can have in total.
They also have nothing to do with a "cache".
In your case you probably want to combine your textures into a single texture atlas (one texture that contains all your tiles).
Here's some code that does that.
It loads a Tiled JSON file, then loads all the referenced images. It then creates a 2D canvas, copies the tiles from each image into the canvas, and remaps the tiles in the maps to match. When it's finished it uses the canvas as the source of the tile texture. Normally I'd do this offline, but it was nice to just be able to hit "reload" to see a new map, so I left it at runtime.
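The core of that atlas-combining step can be sketched like this (a simplified, hypothetical version, not the actual library code; it assumes the tile images are already loaded and all tiles are the same size):

function buildAtlas(tileImages, tileSize) {
  const tilesPerRow = Math.ceil(Math.sqrt(tileImages.length));
  const atlas = document.createElement('canvas');
  atlas.width = atlas.height = tilesPerRow * tileSize;
  const ctx = atlas.getContext('2d');

  tileImages.forEach((img, i) => {
    const x = (i % tilesPerRow) * tileSize;
    const y = Math.floor(i / tilesPerRow) * tileSize;
    ctx.drawImage(img, x, y, tileSize, tileSize); // copy each tile into its atlas slot
  });

  // The finished canvas can be uploaded as one WebGL texture, e.g.
  // gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, atlas);
  return atlas;
}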
In that same library is a shader that draws tilemaps, including flipped and rotated tiles. In other words, to draw a tiled map it's one draw call per layer and only 2 textures are used. One texture holds the tile images (the texture created above). Another texture holds a layer of a tiled map. The shader reads the tiled map texture and uses that to draw the correct tile from the tile image texture. You can see an explanation of this technique in this article.
BTW: The library with the Tiled loader also has a shader that can selectively adjust the hue of a sprite. The library was used with a few games, for example this game.
How can I determine which images in my packed texture atlas are causing the issues? I.e., how can I check the total textures?
You manage the textures, not WebGL, so if you want to know how many you're using add some code to count them.
Is it possible to force a higher cache limit?
No, but like I said above this has nothing to do with any cache.
My guess is that you're using some library (or your own code) that generates a shader, you're adding more and more textures to it, and the shader generator is therefore producing a shader that uses too many textures. The question is why you are using so many textures in the same draw call. No 2D game I know of uses more than 2 to 6 textures in one draw call. The game might use 10,000 textures, but to draw a single sprite or a layer of a tilemap it only needs 1 or 2 of them.
To put it another way. A typical game would do
for each layer of tilemap
bind texture atlas for layer (assuming it's different from other layers)
draw layer
for each sprite
bind texture for sprite
draw sprite
In the example above, even if you had 10,000 textures, only 1 texture is ever in use at a time, so you're not hitting any limits.
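In raw WebGL terms, that loop looks roughly like the sketch below. The names tilemapLayers, sprites, drawLayer and drawSprite are placeholders for whatever your engine provides; the point is that only one texture is bound at any moment, so the per-shader texture-unit limit never comes into play.

for (const layer of tilemapLayers) {
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, layer.atlasTexture); // one atlas per layer
  drawLayer(layer);                                  // one draw call for the whole layer
}
for (const sprite of sprites) {
  gl.bindTexture(gl.TEXTURE_2D, sprite.texture);
  drawSprite(sprite);
}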

Profiling Threejs app

I have a WebGL application I've written using three.js, but the FPS is not good enough on some of my test machines. I've tried to profile my application using Chrome's about:tracing with the help of this article: http://www.html5rocks.com/en/tutorials/games/abouttracing/
It appears that the GPU is being overloaded. I also found out that my FPS falls drastically when I have my entire scene in the camera's view. The scene contains about 17 meshes and a single directional light source. It's not really a heavy scene; I've seen much heavier scenes get rendered flawlessly on the same GPU.
So, what changes can I make in the scene to make it less heavy, without completely changing it? I've already tried removing the textures, but that doesn't seem to fix the problem.
Is there a way to figure out what computation threejs is pushing on to the GPU? Or would this be breaking the basic abstraction threejs gives?
What are general tips for profiling the GPU in WebGL/three.js apps?
There are various things to try.
Are you draw bound?
Change your canvas to 1x1 pixel. Does your framerate go way up? If so, you're drawing too many pixels or your fragment shaders are too complex.
To see if simplifying your fragment shader would help, use a simpler shader. I don't know three.js that well. Maybe the Basic Material?
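Both experiments can be sketched in a couple of lines with standard three.js calls (renderer and scene are assumed to be your existing WebGLRenderer and Scene); if either one makes the frame rate jump, you are fill/fragment bound:

renderer.setSize(1, 1); // render almost no pixels for one test run

// force every mesh through the cheapest material for another test run
scene.overrideMaterial = new THREE.MeshBasicMaterial({ color: 0x888888 });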
Do you have shadows? Turn them off. Does it go faster? Can you use simpler shadows? For example the shadows in this sample are fake. They are just planes with a circle texture.
Are you using any postprocessing effects? Post-processing effects are expensive, especially on mobile GPUs.
Are you drawing lots of opaque stuff? If so, can you sort your drawing order so you draw front to back (near to far)? I'm not sure if three.js has an option to do this or not. I know it can sort transparent stuff back to front, so it should be simple to reverse the test. This will make rendering go quicker, assuming you're drawing with the depth test on, because pixels in the back will be rejected by the DEPTH_TEST and so won't have the fragment shader run for them.
Another thing you can do to save bandwidth is draw to a smaller canvas and have it stretched using CSS to cover the area where you want it to appear. Lots of games do this.
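For example, something along these lines (a sketch, assuming renderer is your THREE.WebGLRenderer and the page is styled so the canvas fills its container):

renderer.setSize(window.innerWidth / 2, window.innerHeight / 2, false); // false = don't touch the CSS size
renderer.domElement.style.width = '100%';  // CSS stretches the half-resolution buffer back up
renderer.domElement.style.height = '100%';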
Are you geometry bound?
You say you're only drawing 17 meshes, but how big are those meshes? 17 twelve-triangle cubes or 17 one-million-triangle meshes?
If you're geometry bound, can you simplify? If the geometry goes far into the distance, can you split it and use LODs? See the LOD sample.
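three.js has a built-in helper for this; a minimal sketch (the three geometry variables are placeholders for your own simplified versions of a mesh):

const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(highDetailGeo, material), 0);    // used when the camera is close
lod.addLevel(new THREE.Mesh(mediumDetailGeo, material), 50); // beyond 50 units
lod.addLevel(new THREE.Mesh(lowDetailGeo, material), 200);   // beyond 200 units
scene.add(lod);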

SVG vs CANVAS (Snap.svg vs FabricJS)

I made a speedtest to compare Snap.svg (SVG) to FabricJS (CANVAS):
http://jsbin.com/tadec/7 (see function dummy()).
In Chrome SVG renders in 120 ms, while CANVAS renders in 1100 ms. SVG is 9x faster than CANVAS.
Fabricjs.com page says in this example that Raphael takes 225 ms and Fabric takes 97 ms (parsing: 80, rendering: 17).
I had the impression (after reading fabricjs.com and paperjs.org) that FabricJS, and Canvas more generally, is faster than SVG.
My speed test claims that SVG is significantly faster than Canvas (at least Snap.svg seems to be significantly faster than FabricJS).
Why is FabricJS so slow in my test? Have I made some mistake in the comparison? I'm surprised that SVG seems to be faster than Canvas in my speed test.
EDIT: My question has two parts: why is rendering so much slower in FabricJS, and why is dragging slower as well?
Your benchmark is broken in my opinion, because besides measuring drawing to canvas you are measuring the parsing of a huge string containing a path, over and over again. Separate this code out of the loop and you should get more reliable results.
Measurements that are provided for canvas libraries are provided for drawing, not for parsing or other pre-processing work. If you use canvas like you use SVG, then yes, it will be slower; it is not intended to be used like SVG. FabricJS provides a way to do that, but it is not optimal. One solution would be to parse the path once, and then use the same path over and over instead of parsing it every time.
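With plain canvas (outside any library) the "parse once, reuse forever" idea can be expressed with the standard Path2D object; the path data below is just a placeholder:

// Parse the SVG path data once, outside the benchmark/animation loop...
const shape = new Path2D('M10 10 L90 10 L90 90 L10 90 Z');

function draw(ctx, x, y) {
  ctx.save();
  ctx.translate(x, y);
  ctx.fill(shape); // ...and reuse the already-parsed path on every frame
  ctx.restore();
}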
Also, the measurements are probably given for drawing to a canvas, not for interaction with its parts. As you say in the comments, rendering may be improved, but why does dragging a shape take so much longer? Because:
maybe the path is being reparsed on each redraw (I'm not sure how FabricJS works internally)
because SVG can redraw only the parts of the image that you are moving, while a canvas is usually redrawn completely. Why? Because you can't "erase" just the part of the canvas where a shape used to be, so the entire canvas is cleared and everything is redrawn at its new position.
Why, then, do people say canvas is faster than SVG for such scenarios? Because it is, if you use it properly. It will be more work, but it will run much faster.
Don't use SVG paths for drawing shapes, or use simple paths and cache them
Cache shapes you use often into an off-screen (hidden) canvas and then copy from that canvas onto the visible canvas when needed (see the sketch after this list)
If you have multiple independent layers in an image (for example 3 layers in a game: a background sky which is moving, background mountains which are moving more slowly, and a character), use multiple canvases. Put the canvases one over another, draw the sky on the bottom canvas, the mountains on the second canvas and the character on the top canvas. That way, when the character on the top canvas moves, you don't have to redraw the entire background.
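A minimal sketch of the off-screen caching tip (the Path2D data and sizes are placeholders):

// Draw the expensive shape once into a hidden canvas...
const cache = document.createElement('canvas');
cache.width = cache.height = 100;
const cacheCtx = cache.getContext('2d');
cacheCtx.fill(new Path2D('M10 10 L90 10 L90 90 L10 90 Z'));

// ...then every frame is just a cheap blit onto the visible canvas.
function render(ctx, x, y) {
  ctx.drawImage(cache, x, y);
}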
I hope my answer is useful for you :)

What is more efficient? Pre-rendering or mirroring directly?

In my game I have characters whose sprites are drawn onto an off-screen canvas upon creation of the character. Now sometimes the characters have to face a different direction than usual, so I mirror them. My question is now:
What is more efficient?
Also pre-render the mirrored sprites, so I can simply draw them onto the canvas
Draw the sprites on a buffer and mirror them on it, then draw them onto my game's canvas
Render them mirrored directly onto the canvas (for some reason this is always off by some pixels, probably because of rounding)
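For reference, the last option ("render them mirrored directly") is usually written along these lines; spriteSheet, the frame coordinates and the destination x/y are placeholders, and the Math.round calls are just one guess at the sub-pixel issue mentioned above, since non-integer translations are a common source of it:

function drawMirrored(ctx, spriteSheet, frameX, frameY, frameW, frameH, x, y) {
  ctx.save();
  ctx.translate(Math.round(x) + frameW, Math.round(y));
  ctx.scale(-1, 1); // flip horizontally
  ctx.drawImage(spriteSheet, frameX, frameY, frameW, frameH, 0, 0, frameW, frameH);
  ctx.restore();
}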

Background with three.js

Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only on the Z axis.
I tried to use:
a cube mapping shader - PROBLEM: artifacts with the shadow planes, the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artifacts with the shadow plane - it has edge highlighting, OR only the other sprites are drawn, without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing in the models.
What can I try more? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over the first "buffer". Maybe using the buffer as background (painting it in 2D with an orthographic projection, and disabling depth buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
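With three.js that multi-pass idea can be sketched roughly like this (untested; backgroundScene/backgroundCamera and scene/camera are assumed to be set up elsewhere, e.g. an orthographic camera looking at a full-screen quad for the background):

renderer.autoClear = false; // we clear manually so the two passes don't wipe each other

function render() {
  renderer.clear();                                   // clear color + depth once
  renderer.render(backgroundScene, backgroundCamera); // pass 1: the background
  renderer.clearDepth();                              // so the background never occludes the 3D objects
  renderer.render(scene, camera);                     // pass 2: the normal scene on top
  requestAnimationFrame(render);
}
requestAnimationFrame(render);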
If you want a "3d" background i.e. something that will follow the rotation of your camera, but not react to the movement (be infinitely far), then the only way to do it is with a cubemap.
The other solution is an environment dome - a fully 3D object.
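In more recent three.js versions the cubemap route can be as simple as setting scene.background (a newer API than this answer assumes; the six image paths are placeholders):

const cubeTexture = new THREE.CubeTextureLoader().load([
  'px.jpg', 'nx.jpg', // +X, -X
  'py.jpg', 'ny.jpg', // +Y, -Y
  'pz.jpg', 'nz.jpg', // +Z, -Z
]);
scene.background = cubeTexture; // follows camera rotation but stays infinitely far away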
If you want a static background, then you should be able to just use an HTML background; I'm not sure why this would fail or what 'aliasing in models' you are talking about.
