I recently switched from three.js revision 71 to revision 84.
When using THREE.PointCloud it was very easy to update (add and remove) points in the scene like this:
function updatePoints(newData) {
geometry.dispose();
geometry.vertices = [];
geometry.vertices.push(...);
scene.add(newPoints);
render();
}
Now in revision 84 THREE.PointCloud has been replaced by THREE.Points, and this pattern doesn't work anymore; I'm clueless as to why.
My actual code works perfectly fine in r71, but in r84 only some points get removed. The raycaster does not work on the points that should be removed, nor can they be animated, yet they do not disappear from the scene.
I tried multiple things, such as adding scene.remove(oldPoints); and geometry.verticesNeedUpdate = true; to the function, as well as adding various setTimeout calls before rendering and before adding the points to the scene. None of this worked.
Any help would be highly appreciated.
Thank you,
k
Since you already need to recreate the vertices, it's not that much more work to recreate the whole cloud:
geometry.dispose();
geometry = new THREE.Geometry();
geometry.vertices.push(...);
scene.remove(pointCloud);
pointCloud = new THREE.Points(geometry, material);
scene.add(pointCloud);
https://codepen.io/Sphinxxxx/pen/zwqvmP
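For reference, here is a minimal sketch of the asker's updatePoints() rewritten along these lines; the loop over newData and the {x, y, z} shape of its entries are assumptions, not taken from the original code:
function updatePoints(newData) {
    // Throw away the old cloud entirely instead of mutating its geometry.
    scene.remove(pointCloud);
    geometry.dispose();
    geometry = new THREE.Geometry();
    newData.forEach(function (p) {
        geometry.vertices.push(new THREE.Vector3(p.x, p.y, p.z));
    });
    pointCloud = new THREE.Points(geometry, material);
    scene.add(pointCloud);
    render();
}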
I would like to build a parallax effect from a 2D image using a depth map, similar to this or this, but using three.js.
The question is: where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without parallax occlusion. Once I add my depth map as the displacementMap property, I can see some sort of displacement, but it is very low-res. (Maybe because displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10);
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
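(As an aside, the displacement resolution is limited by the plane's vertex grid: with only 10×10 segments the displacementMap can barely do anything. A denser geometry, illustrative values below, makes the displacement visibly sharper at the cost of extra vertices, though a custom shader avoids the extra geometry entirely.)
// more width/height segments give the displacement map more vertices to push around
const geometry = new THREE.PlaneGeometry(200, 200, 256, 256);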
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
Update 1
So I've figured out that I seem to need a custom ShaderMaterial for this. After looking at Pixi.js's implementation, I found that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: how?
It would be awesome if someone could point me in the right direction, thanks!
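For what it's worth, here is a rough sketch of that idea in three.js terms. This is not Pixi.js's actual shader, and the uniform names and the 0.015 scale are placeholders I made up; it simply offsets the image UVs by the depth value in the direction of the mouse:
const parallaxMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map:      { value: spriteMap },               // the 2D image from the question
    depthMap: { value: depthMap },                // the depth map from the question
    mouse:    { value: new THREE.Vector2(0, 0) }, // update from mousemove, roughly -1..1
    scale:    { value: 0.015 }                    // strength of the fake parallax
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    uniform sampler2D depthMap;
    uniform vec2 mouse;
    uniform float scale;
    varying vec2 vUv;
    void main() {
      float depth = texture2D(depthMap, vUv).r; // brighter = closer (depends on the depth map)
      gl_FragColor = texture2D(map, vUv + mouse * depth * scale);
    }
  `
});
const plane = new THREE.Mesh(new THREE.PlaneGeometry(200, 200), parallaxMaterial);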
Introduction:
I render an isometric map with Three.js (r95, WebGLRenderer). The map includes many different graphic tilesets. I get the specific tile via a TextureAtlasLoader and its position from a JSON file.
The problem is that it performs really slowly the more tiles I render (I need to render about 120,000 tiles on one map); I can then barely move the camera. I know there are several better approaches than adding every single tile to the scene as a sprite, but I'm stuck somehow.
Current extract from the code to create the tiles (it’s in a loop):
var ts_tile = Map.Imagesets[ims].Map.getTexture((bg_left / tw), (bg_top / th));
var material = new THREE.SpriteMaterial({ map: ts_tile, color: 0xffffff, fog: false });
var sprite = new THREE.Sprite(material);
sprite.position.set(pos_left, -top, 0);
sprite.scale.set(tw, th, 1);
scene.add(sprite);
I also tried to render it as a Mesh, which also works, but the performance is the same (of course):
var material = new THREE.MeshBasicMaterial({ map: ts_tile, color: 0xffffff, transparent: true, depthWrite: false });
var geo = new THREE.PlaneGeometry(1, 1, 1);
var sprite = new THREE.Mesh(new THREE.BufferGeometry().fromGeometry(geo), material);
Possible solutions on the web:
I know that I can't add so many sprites or meshes to a scene, and I have tried different things and looked at examples where it works flawlessly, but I can't adapt their approaches to my code. Every tile on my map has a different texture and its own position.
There is an example in the official three.js examples: they work with PointsMaterial and Points. In the end they only add 5 Points objects to the scene, each made up of about 10,000 "vertices / images": https://threejs.org/examples/#webgl_points_sprites
Another approach can be found here on GitHub: https://github.com/YaleDHLab/pix-plot
They create 5 meshes, each of which includes around 4,096 "tiles" built up from faces, vertices, etc.
Final question:
My question is: how can I render my map more performantly? I'm simply overwhelmed by the task of adapting my code to one of the possible solutions.
I think Sergiu Paraschiv is on the right track. Try to split your rendering into chunks. This strategy and others are outlined here: Tilemap Performance. Depending on how dynamic your terrain is, these chunks could be bigger or smaller. This way you only have to re-render chunks that have changed. Assuming your terrain doesn't change, you can render the whole terrain to a texture, and then you only have to render a single texture per frame rather than a huge array of them. Take a look at this tutorial on rendering to a texture; it should give you an idea of where to start with rendering your chunks.
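A minimal sketch of that render-to-texture idea, assuming the r95 setup from the question; tileScene, tileCamera, mapWidth/mapHeight and the 4096 target size are placeholders, not names from your code:
// Bake the static tile sprites into a texture once (or whenever a chunk changes).
// In r95 a render target can be passed straight to render(); newer versions
// use renderer.setRenderTarget(renderTarget) instead.
var renderTarget = new THREE.WebGLRenderTarget(4096, 4096);
renderer.render(tileScene, tileCamera, renderTarget);

// Afterwards the whole map is just one textured plane in the main scene.
var bakedMaterial = new THREE.MeshBasicMaterial({ map: renderTarget.texture });
var bakedPlane = new THREE.Mesh(new THREE.PlaneGeometry(mapWidth, mapHeight), bakedMaterial);
scene.add(bakedPlane);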
It looks like something broke with r70+ regarding z-depth of sprites.
Here is a jsfiddle that works perfect with r69.
Here is the same jsfiddle except using r71.
You can see that now when the scene rotates, the depths of the sprites are not always shown correctly. Half the time they are rotated into view with wrong z-depths.
Is this a bug, or is there something new I need to add that I missed?
I've tried all variations of common commands below and nothing seems to work all around like it used to.
var shaderMaterial = new THREE.ShaderMaterial({
...
depthTest: false,
depthWrite: false,
transparent: true
});
particleSystem.sortParticles = true;
I'm aware of the new renderDepth, but that solution seems to be unrelated and doesn't explain why it would break the previous behaviour. We don't need to continually update renderDepths manually for all camera angles now, do we?
PointCloud.sortParticles was removed in three.js r70; see this commit.
In your original example (without transparency), you can get your desired behavior by enabling the depth test for your material:
var shaderMaterial = new THREE.ShaderMaterial({
...
depthTest: true
});
In your updated example (with transparency), it's necessary to sort the particles yourself in three.js r70.
Note that three.js still handles z-sorting when rendering THREE.Sprite objects. That could be worth investigating.
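For the record, a rough, unoptimized sketch of what sorting the particles yourself could look like for a Geometry-based cloud (roughly what sortParticles used to do); pointCloud and camera are assumed to be your own objects, and any per-vertex attributes such as colors would have to be reordered the same way:
function sortPointsBackToFront(pointCloud, camera) {
    // Assumes the camera's matrixWorldInverse is up to date (it is after a render).
    // Transform each vertex into view space: more negative z means farther away.
    var toView = new THREE.Matrix4().multiplyMatrices(
        camera.matrixWorldInverse,
        pointCloud.matrixWorld
    );
    pointCloud.geometry.vertices.sort(function (a, b) {
        var za = a.clone().applyMatrix4(toView).z;
        var zb = b.clone().applyMatrix4(toView).z;
        return za - zb; // far points first, so near ones are drawn on top
    });
    pointCloud.geometry.verticesNeedUpdate = true;
}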
Is it possible to merge vertices only at render time? I'm doing a series of morphs, which requires the vertex list to stay the same, but I want to merge the vertices to get a smooth reflection on a cube camera. Is anyone aware of a command similar to "unmerge vertices"?
Have you tried doing it? It should work.
You'll need to set
geometry.verticesNeedUpdate = true;
geometry.elementsNeedUpdate = true;
to tell three.js that the vertices and faces, respectively, have changed. There are other update flags you may need to set too (for instance if the normals have changed). More details here: https://github.com/mrdoob/three.js/wiki/Updates
Note the comment on that page that the total number of vertices can't change. This may require you to do the merge on a temp geometry and then copy the vertices to your rendered geometry.
Alright, this is not in the documentation section, but you need to use the ExplodeModifier, as demonstrated in this example: http://threejs.org/examples/#webgl_geometry_tessellation
var explodeModifier = new THREE.ExplodeModifier();
explodeModifier.modify( geometry );
geometry.computeFaceNormals();
geometry.computeVertexNormals();
// This will undo the geometry.mergeVertices();
I'm using the THREE.js API to create some animations in my app. Now I have a real problem: I'd like to make a spherical rotation around a specific point. The "rotate" methods included in mesh objects let me do this, but the center of the rotation is (by default, I guess) the center of the mesh.
So I only rotate my objects around themselves...
I have already found some examples, but they don't solve my problem. I tried to create Object3D parents, like groups, and to perform the rotation around these groups after translating them, but this still does not work...
Can you please give me a hand with that?
I'm so sorry, I found my problem... Making a jsfiddle made me realize I had forgotten to instantiate my parent as a new Object3D(); that was why I didn't see any objects in my scene when I animated the parent... I'll share a short part of my code anyway, for anyone it might help:
// mesh
mesh = new THREE.Mesh( geometry, material );
parent = new THREE.Object3D();
parent.add(mesh);
// if I want my rotation point situated at (300, 0, 0)
parent.position.set(300, 0, 0);  // the parent becomes the pivot
mesh.position.set(-300, 0, 0);   // the mesh's world position ends up at (0, 0, 0), so rotating the parent orbits it around (300, 0, 0)
scene.add(parent);
http://jsfiddle.net/KqTg8/6/
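For completeness, a small sketch (not part of the fiddle) of how the parent can then be rotated each frame, which orbits the mesh around (300, 0, 0):
function animate() {
    requestAnimationFrame(animate);
    // Rotating the parent spins the offset child around the parent's position.
    parent.rotation.y += 0.01;
    renderer.render(scene, camera);
}
animate();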
Thank you