I am creating an HTML5 web adventure game and making tilemaps with Tiled.
Even with Texture Packer, I seem to be exceeding the max cache of texture units, as I'm getting this error:
Texture cache overflow: 16 texture units available
WebGL Stats shows the limit is 16 for ~70% of devices, and my browser likewise reports 16 texture units.
In game, I opened the Chrome console to check the WebGL specs:
WebGL2RenderingContext.MAX_TEXTURE_IMAGE_UNITS = 34930
WebGL2RenderingContext.MAX_VERTEX_TEXTURE_IMAGE_UNITS = 35660
WebGL2RenderingContext.MAX_COMBINED_TEXTURE_IMAGE_UNITS = 35661
This is a bit confusing, as this article shows the output should be in the 0-10 range, not the 30,000 range:
maxTextureUnits = 8
maxVertexShaderTextureUnits = 4
maxFragmentShaderTextureUnits = 8
My question(s):
How can I determine which images in my packed texture atlas are causing the issues? I.e., how can I check the total textures?
Is it possible to force a higher cache limit?
The values you logged are not the limits; they're the GLenum constants you pass to gl.getParameter to query the limits (MAX_TEXTURE_IMAGE_UNITS is 0x8872 = 34930, which is why everything is in the 30,000 range). The way to check the actual values is
const maxFragmentShaderTextureUnits = gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS);
const maxVertexShaderTextureUnits = gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS);
const maxTextureUnits = gl.getParameter(gl.MAX_COMBINED_TEXTURE_IMAGE_UNITS);
Further, those values have to do with how many textures you can access in a single shader, not how many textures you can have in total.
They also have nothing to do with a "cache".
In your case you probably want to combine your textures into a single texture atlas (one texture that contains all your tiles).
Here's some code that does that.
It loads a Tiled JSON file, then loads all the referenced images; it then creates a 2D canvas and copies the tiles from each image into the canvas, remapping the tile IDs in the map to match. When it's finished, it uses the canvas as the source of the tile texture. Normally I'd do this offline, but it was nice to be able to hit "reload" to see a new map, so I left it at runtime.
In that same library is a shader that draws tilemaps, including flipped and rotated tiles. In other words, drawing a tiled map is one draw call per layer, and only 2 textures are used. One texture holds the tile images (the texture created above). Another texture holds a layer of a Tiled map. The shader reads the tilemap texture and uses it to pick the correct tile from the tile-image texture. You can see an explanation of this technique in this article.
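The core of the fragment shader looks roughly like this (a simplified sketch, not the library's actual code; the uniform names and the tile-ID encoding are assumptions, and the tilemap texture must use NEAREST filtering so tile IDs aren't interpolated):

const fragmentSrc = `
  precision mediump float;
  uniform sampler2D u_tilemap;   // one texel per map cell; RG channels hold the tile's x,y in the atlas
  uniform sampler2D u_tiles;     // the tile atlas
  uniform vec2 u_mapSize;        // map size in tiles
  uniform vec2 u_tilesetSize;    // atlas size in tiles
  varying vec2 v_texcoord;       // 0..1 across the whole layer
  void main() {
    vec2 tilePos = v_texcoord * u_mapSize;           // position measured in tiles
    vec4 tile = texture2D(u_tilemap, v_texcoord);    // which tile goes here
    vec2 tileOffset = floor(tile.rg * 256.0);        // decode tile x,y (0..255 per channel)
    vec2 uv = (tileOffset + fract(tilePos)) / u_tilesetSize;
    gl_FragColor = texture2D(u_tiles, uv);
  }`;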
BTW: the library with the Tiled loader also has a shader that can selectively adjust the hue of a sprite. The library was used for a few games, for example this game.
How can I determine which images in my packed texture atlas are causing the issues? I.e., how can I check the total textures?
You manage the textures, not WebGL, so if you want to know how many you're using, add some code to count them.
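If your engine hides texture creation from you, one blunt way to count is to wrap the context's create/delete calls (a sketch; assumes `gl` is your WebGLRenderingContext or WebGL2RenderingContext):

let liveTextures = 0;
const origCreate = gl.createTexture.bind(gl);
const origDelete = gl.deleteTexture.bind(gl);
gl.createTexture = function() { liveTextures++; return origCreate(); };
gl.deleteTexture = function(tex) { liveTextures--; origDelete(tex); };
// Then log `liveTextures` from your render loop to see the total.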
Is it possible to force a higher cache limit?
No, but like I said above this has nothing to do with any cache.
My guess is that some library, or your own shader-generating code, keeps adding more textures to a single generated shader, so the generator eventually produces a shader that uses too many textures. The question is why you are using so many textures in the same draw call. No 2D game I know of uses more than 2 to 6 textures in one draw call. A game might use 10000 textures in total, but to draw a single sprite or a layer of a tilemap it only needs 1 or 2 textures.
To put it another way, a typical game would do:
for each layer of tilemap
    bind texture atlas for layer (assuming it's different from other layers)
    draw layer
for each sprite
    bind texture for sprite
    draw sprite
In the example above, even if you had 10000 textures, only 1 texture is ever in use at a time, so you're hitting no limits.
I'm having trouble displaying raster data on Leaflet maps.
There is a float N×M array and an RGB scale. I want to add a new layer with colorful tiles. I tried just drawing rectangles, but the display is incredibly slow. I noticed the L.GridLayer.extend() method, but I didn't find any examples of what I want (just a simple grid with the coordinates written on each tile).
Can somebody give an example where raster data is displayed by this or any other method?
If you look at the list of Leaflet plugins, you'll see quite a few that do per-pixel raster manipulation, including:
L.TileLayer.BPG: extends TileLayer; loading a tile means rendering a <canvas> and dumping its contents into the <img>
L.TileLayer.PixelFilter: loads an image and replaces individual pixels
Leaflet-fractal: Displays the mandelbrot set, calculating each pixel of a <canvas>
L.TileLayer.GL: manipulates images with WebGL. Very useful and fast for heavy computations (fractal sets are several orders of magnitude faster) or for manipulating existing images. Do have a look at the hypsometric tint demo; it will be useful if your N×M array is in any kind of graphical format (like the "terrain-rgb" tiles).
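If your N×M array is plain JS data, a minimal custom grid layer along the lines of those plugins might look like this (a sketch; `sampleData` and `valueToColor` are placeholders for your own array lookup and RGB scale):

const DataLayer = L.GridLayer.extend({
  createTile: function (coords) {
    const tile = L.DomUtil.create('canvas', 'leaflet-tile');
    const size = this.getTileSize();
    tile.width = size.x;
    tile.height = size.y;
    const ctx = tile.getContext('2d');
    const img = ctx.createImageData(size.x, size.y);
    for (let y = 0; y < size.y; y++) {
      for (let x = 0; x < size.x; x++) {
        // look up your array value for this pixel and map it to a color
        const [r, g, b] = valueToColor(sampleData(coords, x, y));
        const i = (y * size.x + x) * 4;
        img.data[i] = r; img.data[i + 1] = g; img.data[i + 2] = b; img.data[i + 3] = 255;
      }
    }
    ctx.putImageData(img, 0, 0);
    return tile;
  }
});
new DataLayer().addTo(map);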
I'm trying to load, with three.js, the same image in a large number (~1000) of two-dimensional shapes, but with a different offset in every shape.
I've taken this demo from the official website and customized it into this other demo, with all my shapes and a random background texture.
The problem is that if I clone the texture once per shape the page eats a lot of RAM and it ends up crashing.
You can see this in action by going in the javascript and changing the comments in the addShape function (you'll find the instructions in the code).
I've done some research and found some results, like this open issue or this older question where cloning the texture is recommended, but nothing seems to work in my example.
Am I doing something wrong? Has something changed since those posts about this problem?
Maybe I'm misunderstanding the problem, but why don't you change the UV coordinates of the individual shapes to align the texture, and use just one texture?
From the documentation:
Geometry.faceVertexUvs
Array of face UV layers, used for mapping textures onto the geometry. Each UV layer is an array of UVs matching the order and number of vertices in faces.
To signal an update in this array, Geometry.uvsNeedUpdate needs to be set to true.
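A sketch of that idea with the legacy Geometry API the docs describe: shift each shape's UVs by a per-shape offset so every mesh can share one texture (set texture.wrapS = texture.wrapT = THREE.RepeatWrapping if offsets leave the 0..1 range):

function offsetUVs(geometry, offsetX, offsetY) {
  geometry.faceVertexUvs[0].forEach(function (faceUvs) {
    faceUvs.forEach(function (uv) {      // uv is a THREE.Vector2
      uv.x += offsetX;
      uv.y += offsetY;
    });
  });
  geometry.uvsNeedUpdate = true;         // required after editing UVs
}
// e.g. offsetUVs(shapeGeometry, Math.random(), Math.random());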
So I found out that texturing planets can be really hard. I created a 4096×4096 image and wrapped it around a high-poly sphere. Apart from the possible memory and performance issues that come with a 3-4 MB image, the texture looks bad / pixelated in a close-up (orbital) view.
I was thinking that I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium, and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if far away, remove it from memory and apply the low or medium version.
To be honest, I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL / three.js has improved significantly over time, but since this all runs in the browser, I assume thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything that swaps high detail for low detail.
Because you can't make a 1000000x100000 texture, which is pretty much what you'd need to get the results you want, you'll need to build the sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet, a sphere made of say 100 pieces when slightly closer, a sphere made of 1000 pieces when closer still, a sphere made of 10000 pieces when even closer, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (checking millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing that people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere, all the way zoomed out, and it transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
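For the simple end of the spectrum, three.js ships a THREE.LOD helper that swaps whole meshes by camera distance. It won't give you Google-Maps-style per-piece paging, but it shows the basic switch (a sketch; `material` and `scene` are assumed to exist, and the segment counts and distances are made-up values):

const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 256, 128), material), 0);   // near
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 64, 32), material), 50);    // mid
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(1, 16, 8), material), 200);    // far
scene.add(lod);   // the renderer picks the level by distance to the camera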
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube-mapping; each holds one cubemap face.
Then you loop through all the vertices, take each position vector, and normalize it. This will yield a sphere.
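A sketch of that with a modern BufferGeometry (the idea is the same with the legacy Geometry API):

const radius = 1;
const geo = new THREE.BoxGeometry(2, 2, 2, 32, 32, 32);  // 6 tessellated faces
const pos = geo.attributes.position;
const v = new THREE.Vector3();
for (let i = 0; i < pos.count; i++) {
  v.fromBufferAttribute(pos, i).normalize().multiplyScalar(radius);
  pos.setXYZ(i, v.x, v.y, v.z);          // push each cube vertex onto the sphere
}
geo.computeVertexNormals();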
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096 x 4096 should be 4096 x 2048 on the default sphere with an equirectangular mapping, but the newly mapped sphere can hold 6 x 4096 x 4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
I'm drawing an isometric grid with mouse-over hit-area detection over each tile. It works, but the frame rates are awful. Any idea what is bringing the frame rates down so much? Surely WebGL is capable of attaining better FPS than this?
There are no moving sprites; only diamond-shaped PIXI.Graphics are drawn to the screen.
http://178.79.155.146/pc
Cheers,
Jordan
As a follow-up: with primitives we have to draw them to the stencil buffer each frame, then render that to the render buffer. With a sprite, it is a single draw call to show the texture. If you have many sprites with the same texture, we can batch them all together in a single draw call (we can't do that with Graphics).
Summary: For performance, use sprites (not WebGL primitive graphics!).
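For example (a sketch; `app` and the `tiles` array of {x, y} positions are assumptions, using the Graphics fill API of that Pixi era): render the diamond once, turn it into a texture, and reuse that texture for every tile so the sprites batch:

const g = new PIXI.Graphics();
g.beginFill(0x66cc66);
g.moveTo(0, 16);
g.lineTo(32, 0);
g.lineTo(64, 16);
g.lineTo(32, 32);
g.closePath();
g.endFill();
const diamond = app.renderer.generateTexture(g);
tiles.forEach(function (t) {
  const s = new PIXI.Sprite(diamond);    // same texture => one batched draw call
  s.position.set(t.x, t.y);
  app.stage.addChild(s);
});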
I'm building a game in HTML5. I have hundreds of images that look similar to this:
And at certain points in the game I want to draw an outline around them. Like this:
I want to do different tracing colors at different times and don't want to end up with [number of sprites] * [number of colors] additional images, for memory and bandwidth reasons, so I'm looking at vector drawings.
What I need to come to a solution on are really two separate things:
Calculate a vector path for each frame in a spritesheet. This can be done either dynamically or ahead of time and stored.
Draw the vector path
The engine I'm using for the game is ImpactJS. It doesn't have any support for vector operations. The engine's author did his own vector drawing manually by exporting the vectors from Illustrator for a particular image, then using an online converter tool to change them to HTML5 syntax. This isn't the best method for hundreds of images, so I thought I'd see what information others of you have.
I would still like to draw the images using ImpactJS, since this game is already pretty far along, and just do a second-pass drawing of the outline of the image when necessary.
Thank you for any help!