Introduction:
I render an isometric map with Three.js (r95, WebGLRenderer). The map includes many different graphic tilesets. I get the specific tile via a TextureAtlasLoader and its position from a JSON file. It looks like this:
The problem is that it performs really slowly the more tiles I render (I need to render about 120’000 tiles on one map); I can barely move the camera then. I know there are several better approaches than adding every single tile as a sprite to the scene, but I’m stuck somehow.
Current extract from the code to create the tiles (it’s in a loop):
var ts_tile = Map.Imagesets[ims].Map.getTexture((bg_left / tw), (bg_top / th));
var material = new THREE.SpriteMaterial({ map: ts_tile, color: 0xffffff, fog: false });
var sprite = new THREE.Sprite(material);
sprite.position.set(pos_left, -top, 0);
sprite.scale.set(tw, th, 1);
scene.add(sprite);
I also tried to render it as a Mesh, which also works, but the performance is the same (of course):
var material = new THREE.MeshBasicMaterial({ map: ts_tile, color: 0xffffff, transparent: true, depthWrite: false });
var geo = new THREE.PlaneGeometry(1, 1, 1);
var sprite = new THREE.Mesh(new THREE.BufferGeometry().fromGeometry(geo), material);
Possible solutions found on the web:
I know that I can’t add so many sprites or meshes to a scene, and I have tried different things and looked at examples where it works flawlessly, but I can’t adapt their approaches to my code. Every tile on my map has a different texture and its own position.
There is an example in the official three.js examples: they work with PointsMaterial and Points. In the end they only add 5 Points objects to the scene, which together contain about 10,000 “vertices / images”: https://threejs.org/examples/#webgl_points_sprites
Another approach can be found here on github: https://github.com/YaleDHLab/pix-plot
They create 5 meshes; every mesh includes around 4,096 “tiles”, which they build up from faces, vertices, etc.
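(For reference, the core of that merged-geometry idea looks roughly like the sketch below, written against r95 with BufferGeometry instead of the older Faces/Vertices API. This is a hand-written illustration, not code from pix-plot; tiles, t.u / t.v / t.du / t.dv and atlasTexture are placeholder names for your atlas data. Every tile that shares one atlas texture becomes two triangles in a single geometry, so the whole group is drawn with one draw call.)
// Build one BufferGeometry containing two triangles per tile.
var positions = new Float32Array(tiles.length * 6 * 3);
var uvs = new Float32Array(tiles.length * 6 * 2);
for (var i = 0; i < tiles.length; i++) {
    var t = tiles[i]; // { x, y, u, v, du, dv } - tile position and its UV rectangle in the atlas
    var quad = [
        // triangle 1: bottom-left, bottom-right, top-right
        [t.x,      -t.y - th, t.u,        t.v       ],
        [t.x + tw, -t.y - th, t.u + t.du, t.v       ],
        [t.x + tw, -t.y,      t.u + t.du, t.v + t.dv],
        // triangle 2: bottom-left, top-right, top-left
        [t.x,      -t.y - th, t.u,        t.v       ],
        [t.x + tw, -t.y,      t.u + t.du, t.v + t.dv],
        [t.x,      -t.y,      t.u,        t.v + t.dv]
    ];
    for (var j = 0; j < 6; j++) {
        positions.set([quad[j][0], quad[j][1], 0], (i * 6 + j) * 3);
        uvs.set([quad[j][2], quad[j][3]], (i * 6 + j) * 2);
    }
}
var mergedGeometry = new THREE.BufferGeometry();
mergedGeometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
mergedGeometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
scene.add(new THREE.Mesh(mergedGeometry, new THREE.MeshBasicMaterial({ map: atlasTexture, transparent: true })));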
Final question:
My question is: how can I render my map more performantly? I’m simply overwhelmed by the task of adapting my code to one of the possible solutions.
I think Sergiu Paraschiv is on the right track. Try to split your rendering into chunks. This strategy, among others, is outlined here: Tilemap Performance. Depending on how dynamic your terrain is, these chunks could be bigger or smaller. That way you only have to re-render chunks that have changed. Assuming your terrain doesn't change, you can render the whole terrain to a texture, so that you only have to render a single texture per frame rather than a huge array of them. Take a look at this tutorial on rendering to a texture; it should give you an idea of where to start with rendering your chunks.
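To make the render-to-texture idea concrete, here is a rough, untested sketch against r95. CHUNK_SIZE, chunkTiles, createTileSprite and chunkOffsetX/Y are assumptions standing in for your own data, and tw/th are your tile dimensions. Each chunk of tiles is baked once into a WebGLRenderTarget and then lives in the main scene as a single textured plane:
var CHUNK_SIZE = 64; // tiles per chunk side, tune to taste
var chunkWidth = CHUNK_SIZE * tw;
var chunkHeight = CHUNK_SIZE * th;

// Temporary scene and camera used only while baking one chunk.
var chunkScene = new THREE.Scene();
var chunkCamera = new THREE.OrthographicCamera(0, chunkWidth, 0, -chunkHeight, 0.1, 10);
chunkCamera.position.z = 1;

// Fill the temporary scene with this chunk's sprites (same sprite code as above,
// just with positions relative to the chunk origin).
chunkTiles.forEach(function (tile) {
    chunkScene.add(createTileSprite(tile));
});

// Bake once; in r95 render() still accepts a render target as third argument.
var target = new THREE.WebGLRenderTarget(chunkWidth, chunkHeight);
renderer.render(chunkScene, chunkCamera, target, true);

// From now on the whole chunk is a single textured quad in the main scene.
var chunkMesh = new THREE.Mesh(
    new THREE.PlaneGeometry(chunkWidth, chunkHeight),
    new THREE.MeshBasicMaterial({ map: target.texture, transparent: true })
);
chunkMesh.position.set(chunkOffsetX, chunkOffsetY, 0);
scene.add(chunkMesh);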
Related
I would like to build a parallax effect from a 2D image using a depth map, similar to this or this, but using three.js.
The question is: where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without parallax occlusion. Once I add my depth map as the displacementMap property I can see some sort of displacement, but it is very low-res. (Maybe because displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10);
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
Update 1
So I figured out that I seem to need a custom ShaderMaterial for this. After looking at pixi.js's implementation, I found out that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: HOW?
It would be awesome if someone could point me in the right direction, thanks!
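For what it's worth, here is a rough, untested sketch of what such a ShaderMaterial could look like in three.js. The uniform name uMouse and the 0.02 strength factor are made up, and spriteMap / depthMap are the textures from the first attempt above; the fragment shader simply shifts the color texture's UVs by an amount proportional to the sampled depth and the mouse position:
const parallaxMaterial = new THREE.ShaderMaterial({
    uniforms: {
        map: { value: spriteMap },
        depthMap: { value: depthMap },
        uMouse: { value: new THREE.Vector2(0, 0) } // updated in the mousemove listener, range -1..1
    },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform sampler2D map;
        uniform sampler2D depthMap;
        uniform vec2 uMouse;
        varying vec2 vUv;
        void main() {
            float depth = texture2D(depthMap, vUv).r;
            vec2 offset = uMouse * 0.02 * depth; // the strength is arbitrary, tune it
            gl_FragColor = texture2D(map, vUv + offset);
        }
    `
});
const plane = new THREE.Mesh(geometry, parallaxMaterial);
The existing mousemove listener would then update parallaxMaterial.uniforms.uMouse.value instead of rotating the camera.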
How can I fix the reflection around the circles?
I got some weird results on my object. I hope the pictures show the problem.
Three.js result
Expected result
loader.load( "model.js", function ( geometry ) {
    geometry.computeVertexNormals();
    var mesh = new THREE.Mesh( geometry, new THREE.MeshPhongMaterial( {
        color: 0xffffff,
        specular: 0xffffff,
        shininess: 65,
        metal: true,
        envMap: cubeCamera.renderTarget
    } ) );
    scene.add( mesh );
});
I have exported the model with the three.js JSON exporter from Blender. The model has vertices, faces and UVs.
Hmm, this is hard to explain, but basically it is because you are using triangle polygons and your topology is not good.
It is often referred to as "pinching" if I recall correctly.
When modelling for three.js, you are basically modelling for a game engine, and all the same rules need to be followed.
Things you could try to get a better reflection:
Reduce the amount of polygons as it seems you have WAY more polygons than are needed.
Try for a cleaner topology using quads and minimising tris.
Set up smoothing groups
An easy fix would be to use a simple plane in the correct shape with holes cut out, as it seems the sides are not really visible anyway (I am talking about the large flat piece specifically here). I suggest this because I have found that reflections with MeshPhongMaterial are often not adversely affected by the use of ngons and tris when all the vertices are flat and in the same smoothing group.
It looks like something broke with r70+ regarding z-depth of sprites.
Here is a jsfiddle that works perfect with r69.
Here is the same jsfiddle except using r71.
You can see that now when the scene rotates, the depths of the sprites are not always shown correctly. Half the time they are rotated into view with wrong z-depths.
Is this a bug, or is there something new I need to add that I missed?
I've tried all variations of common commands below and nothing seems to work all around like it used to.
var shaderMaterial = new THREE.ShaderMaterial({
    ...
    depthTest: false,
    depthWrite: false,
    transparent: true
});
particleSystem.sortParticles = true;
I'm aware of the new renderDepth, but that solution seems to be unrelated and doesn't explain why it would break previous behaviour. We don't need to continually update renderDepths manually for all camera angles now do we?
PointCloud.sortParticles was removed in three.js r70; see this commit.
In your original example (without transparency), you can get your desired behavior by enabling the depth test for your material:
var shaderMaterial = new THREE.ShaderMaterial({
    ...
    depthTest: true
});
In your updated example (with transparency), it's necessary to sort the particles yourself in three.js r70.
Note that three.js still handles z-sorting when rendering THREE.Sprite objects. That could be worth investigating.
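For reference, a rough sketch of sorting the particles yourself, assuming a THREE.BufferGeometry with a 'position' attribute and a point cloud that is not otherwise transformed; call it each frame before rendering so the particles are drawn back to front:
function sortParticles(geometry, camera) {
    var positions = geometry.attributes.position.array;
    var count = positions.length / 3;
    var v = new THREE.Vector3();

    // Compute the view-space depth of every particle.
    var order = [];
    for (var i = 0; i < count; i++) {
        v.set(positions[3 * i], positions[3 * i + 1], positions[3 * i + 2]);
        v.applyMatrix4(camera.matrixWorldInverse);
        order.push([v.z, i]);
    }

    // The most negative z is farthest from the camera, so sorting ascending
    // gives back-to-front order.
    order.sort(function (a, b) { return a[0] - b[0]; });

    // Rewrite the position attribute in sorted order.
    var sorted = new Float32Array(positions.length);
    for (var j = 0; j < count; j++) {
        var src = order[j][1];
        sorted[3 * j] = positions[3 * src];
        sorted[3 * j + 1] = positions[3 * src + 1];
        sorted[3 * j + 2] = positions[3 * src + 2];
    }
    geometry.attributes.position.array = sorted;
    geometry.attributes.position.needsUpdate = true;
}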
I am trying to replicate this effect: http://www.hys-inc.jp/ The only difference is that I want the particles to be positioned in such a way that they resemble the Earth - a 'textured face', if you will.
Browsing through their code, this is what they use to set up the particles group:
var map = THREE.ImageUtils.loadTexture('/admin/wp-content/themes/hys/assets/img/particle.png');
var m = new THREE.ParticleSystemMaterial({
    color: 0x000000,
    size: 1.5,
    map: map,
    //blending: THREE.AdditiveBlending,
    depthTest: false,
    transparent: true
});
var p = new THREE.ParticleSystem(g, m);
scene.add(p);
This is all great, but how do I position them along a sphere so they resemble the planet? I know how to do it in a 2D rendering context, using a picture and pixel scanning to get the right coordinates for the particles' positions, but I am clueless about how to do it in 3D...
Any help is more than welcome.
If you have a grid of pixels representing color values that show the surface of the Earth in two dimensions as particles, projecting it into three dimensions requires a sphere projection method. You can take a look at this question for an implementation of the Mercator projection:
how map 2d grid points (x,y) onto sphere as 3d points (x,y,z)
There are many methods for accomplishing this, with a good deal of variation. Stereographic is another common approach: http://en.wikipedia.org/wiki/List_of_map_projections
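As a simpler variant, here is a sketch of an equirectangular mapping (not Mercator); earthPixels, imgWidth / imgHeight and the radius of 200 are placeholders for whatever your 2D pixel scan produces:
// Map an image pixel (x to the right, y down) to a point on a sphere of the given radius.
function pixelToSphere(x, y, width, height, radius) {
    var lon = (x / width) * 2 * Math.PI - Math.PI;   // longitude: -PI .. PI
    var lat = Math.PI / 2 - (y / height) * Math.PI;  // latitude:  PI/2 .. -PI/2
    return new THREE.Vector3(
        radius * Math.cos(lat) * Math.cos(lon),
        radius * Math.sin(lat),
        radius * Math.cos(lat) * Math.sin(lon)
    );
}

// One vertex per "land" pixel found by the 2D scan goes into the particle geometry g.
var g = new THREE.Geometry();
earthPixels.forEach(function (px) {
    g.vertices.push(pixelToSphere(px.x, px.y, imgWidth, imgHeight, 200));
});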
I'm trying to implement the code from this tutorial, but in much greater proportions (radius = 100000 units).
I don't know if the size matters, but on my Earth render the clouds render strangely.
As the tutorial does, I'm using two spheres and three textures (earth map, bump map, clouds).
Here is the result (it's worse when the clouds are closer):
The closer the clouds are to the planet surface, the more visible this glitch is. If the clouds are sufficiently far away (but that's not realistic) the problem disappears completely.
What can I do?
Use a logarithmic depth buffer instead of the linear one. This is a very simple change: just enable logarithmicDepthBuffer when you create your THREE.WebGLRenderer, like so:
var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true});
Here's an example you can have a look at:
http://threejs.org/examples/#webgl_camera_logarithmicdepthbuffer
Using polygonOffset as suggested by LJ_1102 is a possibility, but it shouldn't be necessary.
What you're experiencing is z-fighting due to insufficient depth buffer resolution.
You basically have three options to counteract this:
Write / use a multi-texture shader that renders all three textures on one sphere.
Increase the distance between the sphere faces. / Decrease the distance between your near and far clipping planes.
Use polygonOffset and the POLYGON_OFFSET_FILL render state to offset the depth values written by your outer sphere; a minimal sketch follows below. Read more about polygonOffset here.
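If you do try the polygonOffset route, a minimal sketch of the outer sphere's material could look like this (the values are a guess and need tuning; cloudsTexture stands in for your cloud texture):
var cloudsMaterial = new THREE.MeshPhongMaterial({
    map: cloudsTexture,
    transparent: true,
    polygonOffset: true,
    polygonOffsetFactor: -1, // negative values pull the clouds toward the camera in the depth buffer
    polygonOffsetUnits: -1
});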