I have to apply an alpha coloring filter to a point cloud, as described in this article: https://www.mapbox.com/blog/colorize-alpha-image-filter/, to achieve a sort of heatmap.
I'm rendering a 2D point cloud to a texture and then rendering that texture onto a plane using a custom shader that handles the colorize-alpha filtering.
The problem is that I don't understand how I can correctly zoom inside the texturized point cloud while keeping the original size of the point cloud's points.
I've created a simplified example, without the real colorize-alpha filtering but with the structure of my render-to-texture setup: http://jsfiddle.net/q8fpt7eL/1/
The effect I want to achieve is exactly the same as when you draw the point cloud directly. In the jsfiddle you can comment out the RTT part and uncomment the direct-render part to see what I mean.
//render to texture
//renderer.render(sceneRTT, cameraRTT, rtTexture, false);
//renderer.render(scene, camera);
//render directly the point cloud
renderer.render(sceneRTT, camera);
I've already tried to use the same camera, and to copy the camera position/rotation onto the cameraRTT object, but that doesn't seem to work correctly. I've also tried an orthographic camera in the RTT scene, but without success.
Does anyone have an idea how I can achieve this?
Thanks
On line 41, you are setting the OrbitControls to control the camera of the "plane scene", when you really want them to control the RTT camera. Try this:
new THREE.OrbitControls(cameraRTT, renderer.domElement);
That looks much better: now you can zoom inside the point cloud.
Lastly, all you have to do is make the camera orthographic and set up your plane so it fills the scene.
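A minimal sketch of that setup, assuming the variable names from the fiddle (cameraRTT, rtTexture, scene, renderer); depending on your three.js revision the material may take rtTexture directly instead of rtTexture.texture:
// Let OrbitControls drive the RTT camera, so zooming happens inside the point-cloud scene:
var controls = new THREE.OrbitControls(cameraRTT, renderer.domElement);
// Display camera: orthographic, with a plane that exactly fills its frustum,
// so the render target is always shown 1:1 on screen:
var camera = new THREE.OrthographicCamera(
    -window.innerWidth / 2, window.innerWidth / 2,
    window.innerHeight / 2, -window.innerHeight / 2,
    -1000, 1000);
var plane = new THREE.Mesh(
    new THREE.PlaneGeometry(window.innerWidth, window.innerHeight),
    new THREE.MeshBasicMaterial({ map: rtTexture.texture }));
scene.add(plane);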
I have begun a personal project using three.js, and I would like someone to explain how the three.js add method works when adding to the scene.
For example, in a small sample, a scene is defined and a camera is created specifying the near and far planes as 0.1 and 1000 units. That is all that is defined in the world, and the mesh is added to the scene by calling scene.add(cube).
How is the cube added to the scene without any coordinates given?
On this note, does anybody have a good link explaining the coordinate systems used by three.js/OpenGL? Many thanks.
When it comes to dipping a toe into the pool of three.js, a few resources that I found helpful getting started were https://discoverthreejs.com/book/first-steps/first-scene/ and https://threejs.org/docs/#api/en/core/Object3D.
In the case of the latter, https://threejs.org/docs/#api/en/core/Object3D.position specifies the defaults used when placing an object into the scene.
Hope this helps.
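As a minimal illustration of those defaults (the geometry and material here are just placeholders):
// Every Object3D (a Mesh included) starts at the origin of its parent's
// coordinate system: position (0, 0, 0), no rotation, scale (1, 1, 1).
var cube = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshNormalMaterial());
scene.add(cube);            // added at (0, 0, 0)
console.log(cube.position); // Vector3 { x: 0, y: 0, z: 0 }
// Move it explicitly if you want it elsewhere; three.js uses a right-handed
// system: +x right, +y up, +z toward the viewer.
cube.position.set(2, 0, -5);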
I am trying to get back and export the mesh that is being displaced by a displacementMap.
The shader transforms vertices according to this line (from
three.js/src/renderers/shaders/ShaderChunk/displacementmap_vertex.glsl):
transformed += normalize( objectNormal ) * ( texture2D( displacementMap, uv ).x * displacementScale + displacementBias );
This displaces each vertex along its normal by an amount read from the displacementMap at that vertex's uv coordinates.
I am trying to create this mesh/geometry so that I can then later export it.
I have created a "demo" of the problem here:
Github Page
I would like to get the displaced mesh, as seen in the viewport, upon pressing exportSTL. However, I am only getting the undisplaced plane.
I understand why this happens: the displacement only happens in the shader and does not actually displace the geometry of the plane.
I have not found a method provided by three.js for this, and so far I have not found any way to get the changes back out of the shader.
So I am trying to do it with a function in demo.js.
However, I am a WebGL/three.js newbie and have trouble re-creating what the shader does.
I have found exporters that handle morphTargets, but these are of no help here.
After reading this question I tried PlaneBufferGeometry, as this is closer to the shader - but this produces the same results for me.
I think this question originally tried to produce something similar, but accepted an unrelated answer.
In the end I would like to draw on an HTML canvas, which then updates the texture in real time (I have this part working). The user can then export the mesh for 3D printing.
Is there a way three.js can give me the modified geometry back from the shader?
Or can someone help me translate the shader line into a "conventional" three.js function?
Maybe this is totally the wrong approach to get a displaced mesh?
Update - Example is working
Thanks to the example from DeeFisher, I can now calculate the displacement on the CPU, as originally suggested by imerso.
If you click on the Github Page now, you will get a working example.
At the moment I do not fully understand why I have to mirror the canvas to get the correct displacement in the end, but this is at worst a minor nuisance.
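For reference, a sketch of that CPU-side calculation, mirroring the GLSL line quoted above; names like displacementCanvas, displacementScale and displacementBias are assumptions, not code from the demo:
function applyDisplacementOnCPU(geometry, displacementCanvas, displacementScale, displacementBias) {
    var ctx = displacementCanvas.getContext('2d');
    var width = displacementCanvas.width;
    var height = displacementCanvas.height;
    var pixels = ctx.getImageData(0, 0, width, height).data;
    var pos = geometry.attributes.position; // assumes a (Plane)BufferGeometry
    var nor = geometry.attributes.normal;
    var uv = geometry.attributes.uv;
    for (var i = 0; i < pos.count; i++) {
        // Map the vertex uv to a pixel; v is flipped because canvas y runs
        // downward, which is likely why the canvas has to be mirrored.
        var px = Math.min(width - 1, Math.floor(uv.getX(i) * width));
        var py = Math.min(height - 1, Math.floor((1 - uv.getY(i)) * height));
        var red = pixels[(py * width + px) * 4] / 255; // the .x channel, in 0..1
        // transformed += normalize(objectNormal) * (tex.x * scale + bias)
        var d = red * displacementScale + displacementBias;
        pos.setXYZ(i,
            pos.getX(i) + nor.getX(i) * d,
            pos.getY(i) + nor.getY(i) * d,
            pos.getZ(i) + nor.getZ(i) * d);
    }
    pos.needsUpdate = true;
    geometry.computeVertexNormals(); // recompute shading after displacement
}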
To do that while still using a shader for the displacement, you will need to switch to WebGL2 and use Transform-Feedback (Google search: WebGL2 Transform-Feedback).
An alternative would be to read the texture back to the CPU, and scan it while displacing the vertices on the CPU only (Google search: WebGL readPixels).
Both alternatives will require some effort, so no code sample at this time. =)
BABYLON.js can be used in conjunction with THREE.js and it allows you to displace the actual mesh vertices when applying displacement maps:
var sphere = BABYLON.Mesh.CreateSphere("Sphere", 64, 10, scene, true);
sphere.applyDisplacementMap(url, minHeight, maxHeight, onSuccess, uvOffset, uvScale)
See an example of the function in use here.
You can then use a for loop to transfer the BABYLON mesh data into a THREE mesh object.
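A rough sketch of that transfer (done here as a bulk copy rather than an explicit for loop; older three.js revisions use addAttribute instead of setAttribute):
// Pull the displaced vertex data out of the Babylon mesh...
var positions = sphere.getVerticesData(BABYLON.VertexBuffer.PositionKind);
var indices = sphere.getIndices();
// ...and copy it into a three.js geometry.
var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geometry.setIndex(indices);
geometry.computeVertexNormals();
var threeMesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());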
I'm a newbie at 3D programming.
I'm trying to use three.js and Spine to render 2D-Characters on 3D-Space.
And I want to render the Mesh as a Sprite.
That is, I want objects to always stay parallel to the view plane, rather than pointing at the camera's position as the lookAt() function does.
Spine has SkeletonMesh, which inherits from Mesh.
So it behaves like a 3D object even though it has only one face.
Is there any simple way?
Or please advise a mathematical method.
Thanks.
If you want an object to face the camera, but look in a direction that is parallel to the camera's look direction, you can use this pattern:
object.quaternion.copy( camera.quaternion );
You can't assign the quaternion directly; you must copy it every frame in the animation loop -- or at least whenever the camera changes its orientation.
This approach is an alternative to using
object.lookAt( camera.position );
three.js r.84
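In practice that means something like this in the render loop (mesh here stands in for your SkeletonMesh):
function animate() {
    requestAnimationFrame(animate);
    // Keep the mesh parallel to the view plane, not pointed at the camera position:
    mesh.quaternion.copy(camera.quaternion);
    renderer.render(scene, camera);
}
animate();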
I'm making a game with three.js in which a mesh is rendered in greater detail as the player gets closer (i.e. different geometry and material). This seems like a common requirement in open-world games, and I was wondering what the standard procedure is in three.js.
It looks like changing the geometry of a mesh is quite computationally costly, so should I just store a different mesh for each distance, or is there an existing data structure for this purpose?
You want to use level-of-detail (LOD) modeling.
There is a demonstration of it in the official three.js examples: http://threejs.org/examples/webgl_lod.html
In the example, press the 'W/S' keys to see the effect.
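The pattern from that example, reduced to a sketch (the geometries, material and distances here are placeholders):
var lod = new THREE.LOD();
// One mesh per detail level; the second argument is the distance
// (in world units) at which that level becomes active.
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(10, 4), material), 0);   // close: high detail
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(10, 2), material), 50);  // medium
lod.addLevel(new THREE.Mesh(new THREE.IcosahedronGeometry(10, 0), material), 150); // far: low detail
scene.add(lod);
// In the render loop, pick the correct level for the current camera distance:
lod.update(camera);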
three.js r.62
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only on the Z axis.
I have tried:
a cube-mapping shader - PROBLEM: artefacts with the shadow planes, unstable drawing
a THREE.Sprite that duplicates the camera movement - PROBLEM: artefacts with the shadow plane - it has edge highlighting, OR it draws only the other sprites without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing in the models.
What else can I try? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over that first "buffer": painting it in 2D with an orthographic projection and disabling depth-buffer writes in that pass.
I haven't tried it myself with three.js, but that's how I'd do it with "traditional" OpenGL.
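In three.js terms, a sketch of that idea could look like this (backgroundScene and backgroundCamera are assumed to exist alongside your main scene and camera):
renderer.autoClear = false; // we clear manually between the passes
function render() {
    renderer.clear();
    renderer.render(backgroundScene, backgroundCamera); // pass 1: background only
    renderer.clearDepth();                              // the background can never occlude the models
    renderer.render(scene, camera);                     // pass 2: 3D objects on top
}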
If you want a "3d" background i.e. something that will follow the rotation of your camera, but not react to the movement (be infinitely far), then the only way to do it is with a cubemap.
The other solution is a environment dome - a fully 3d object.
If you want a static background, then you should be able todo just a html background, i'm not sure why this would fail and what 'aliasing in models' you are talking about.
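For the cubemap route mentioned above, recent three.js revisions let you assign one directly as the scene background (the paths here are placeholders):
var loader = new THREE.CubeTextureLoader();
scene.background = loader
    .setPath('textures/cube/') // placeholder path
    .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);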