Changing geometry/material of mesh with distance - javascript

I'm making a game with three.js in which a mesh is rendered in greater detail as the player gets closer (i.e., different geometry and material). This seems like a common requirement in open-world games, and I was wondering what the standard procedure is in three.js.
It looks like changing the geometry of a mesh is quite computationally costly, so should I just store a different mesh for each distance, or is there an existing data structure for this purpose?

You want to use level-of-detail (LOD) modeling.
There is an official three.js example of this: http://threejs.org/examples/webgl_lod.html
In the example, press the 'W/S' keys to see the effect.
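For reference, here is a minimal sketch of THREE.LOD; the geometries, materials, and switch distances below are placeholder choices, not taken from the example.

// one THREE.LOD object holds several versions of the same mesh
const lod = new THREE.LOD();

const levels = [
  { detail: 3, distance: 0 },   // high detail when the camera is close
  { detail: 1, distance: 50 },  // medium detail
  { detail: 0, distance: 100 }, // low detail when the camera is far away
];

for (const { detail, distance } of levels) {
  const mesh = new THREE.Mesh(
    new THREE.IcosahedronGeometry(10, detail),
    new THREE.MeshNormalMaterial()
  );
  lod.addLevel(mesh, distance); // register this version for that distance
}

scene.add(lod);

// in the render loop: select the level for the current camera distance
// (recent three.js versions also do this automatically during rendering)
lod.update(camera);

So rather than swapping the geometry of one mesh, you store one mesh per detail level and let the LOD object switch between them.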
three.js r.62

Related

What is the correct way to render webXR AR scenes using THREE.js?

I'm pretty new to THREE.js and WebXR, and the current spec is very confusing to me. The WebXR API examples don't show any way to use the THREE.js renderer, and the examples from three.js seem to be in an "in between" stage between WebVR and WebXR.
The current way that I'm aware of is to set the session of renderer.vr to a session that has been requested. While doing this does get me the 3D scene with the camera background, I'm at a loss as to how the scene actually functions: calling requestHitTest() on that session doesn't seem to work, and my default cube is floating somewhere beneath the floor.
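Roughly, my setup looks like this (simplified; I may have the session request details wrong, since they keep changing between spec drafts):

const renderer = new THREE.WebGLRenderer({ antialias: true, alpha: true });
renderer.vr.enabled = true; // renderer.xr in newer three.js builds

// request an AR session and hand it to the renderer
navigator.xr.requestSession('immersive-ar').then((session) => {
  renderer.vr.setSession(session);
  renderer.setAnimationLoop(render); // XR frames require setAnimationLoop
});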
When I log the position of the camera while using the demo, this also comes back as 0,0. Is the camera not being moved?
Is there any good place to get accurate examples on how to get these functions to work?

How does threejs place a mesh into the scene without any coordinates given?

I have begun a personal project using three.js, and I would like someone to explain how the add method works when adding a mesh to the scene.
For example, in a small example, a scene is defined and a camera is created with the near and far planes specified at 0.1 and 1000 units. That is all that is defined in the world, and the mesh is added into the scene by calling scene.add(cube).
How is the cube added into the scene without any coordinates given?
On this note, does anybody have a good link explaining the coordinate systems used by three.js/OpenGL? Many thanks.
When it comes to dipping a toe into the pool of three.js, a few resources I found helpful for getting started were https://discoverthreejs.com/book/first-steps/first-scene/ and https://threejs.org/docs/#api/en/core/Object3D.
In the case of the latter, https://threejs.org/docs/#api/en/core/Object3D.position specifies the defaults when placing an object into the scene.
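As a minimal sketch (the BoxGeometry and material here are arbitrary choices):

// a mesh added with no coordinates sits at the default position (0, 0, 0)
const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 })
);
scene.add(cube);
console.log(cube.position); // Vector3 { x: 0, y: 0, z: 0 }

// a default camera at the origin looks down the negative z-axis,
// so move the cube in front of it to make it visible
cube.position.set(0, 0, -5);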
Hope this helps.

three.js Raycaster seems to work against the first key frame of the animation all the time

Raycasting selection is working fine for static meshes in my project; however, for animated meshes the ray selection doesn't seem to see the movement of the mesh and only responds to the mesh's non-animated (original) position.
Below is an animated model: picking only registers its first-frame pose, and a small red dot is created where the ray detects the model.
The bone matrices are computed by the CPU, but the new vertex positions are computed by the GPU.
So the CPU has access to the first pose only.
That's why raycasting does not work (properly) for skinned meshes.
My idea is to update the model's position when updating the animation, or to use GPU calculations to get the location, but I don't know how to do either. I'm looking forward to your suggestions. Thank you.
Static model
Animated model
jsfiddle view
Currently, raycasting in three.js supports morph targets (for THREE.Geometry only) by replicating the vertex shader computations on the CPU.
So yes, in theory, you could add the same functionality to support raycasting for skinned meshes for both THREE.Geometry and THREE.BufferGeometry. However, a more efficient approach would be to use "GPU picking".
You can find an example of GPU picking among the official three.js examples. In that example, the objects are not animated, but the concept is the same.
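The essence of the technique, as a rough sketch (the helper names here are my own, and the setRenderTarget calls use the current API, which differs slightly from r.98):

// every pickable mesh gets a duplicate whose flat color encodes an ID
// (start IDs at 1 so that 0 can mean "background")
const pickingScene = new THREE.Scene();
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);

function addPickable(mesh, id) {
  const pickingMesh = new THREE.Mesh(
    mesh.geometry,
    new THREE.MeshBasicMaterial({ color: new THREE.Color(id) })
  );
  pickingMesh.matrixAutoUpdate = false;
  pickingMesh.matrix = mesh.matrix; // share the transform with the original
  pickingScene.add(pickingMesh);
}

function pick(renderer, camera, mouseX, mouseY) {
  const dpr = window.devicePixelRatio;
  // render only the single pixel under the mouse into the 1x1 target
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    (mouseX * dpr) | 0, (mouseY * dpr) | 0, 1, 1
  );
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  // read the pixel back and decode the ID from its RGB value
  const pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixel);
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}

For a skinned mesh, the picking duplicate would also have to share the skeleton, so that the picking render sees the same pose the user sees.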
three.js r.98

3D cube creation and handling events on that cube

I have created multiple 3D cubes in an HTML5 canvas.
I was trying to handle a click event on the 3D cubes so that I can know which cube got clicked.
To create the cubes I used ProcessingJS.
It worked well, but I was not able to get the click position.
I read about Paper.js, which creates a shape and stores it in an object.
Is it possible to create 3D things with Paper.js?
Or is there any way I can tell which cube got clicked through ProcessingJS?
Please share whether there are any other ways to do this.
Thanks in advance.
Paper.js deals with 2D vector graphics.
While in theory you could represent a cube, using skewed squares for example, it would take a lot of effort and time just to create one cube.
You are much better off using a 3D library, e.g. three.js.
Here is an already cooked-up example that uses raycasting to detect clicks on the sides of cubes: http://mrdoob.github.io/three.js/examples/canvas_interactive_cubes.html
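At its core, that example boils down to a few lines; here is a minimal sketch, assuming a scene and camera are already set up:

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('click', (event) => {
  // convert the click into normalized device coordinates (-1 to +1)
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // cast a ray from the camera through the click position
  raycaster.setFromCamera(pointer, camera);
  const intersects = raycaster.intersectObjects(scene.children);
  if (intersects.length > 0) {
    console.log('clicked:', intersects[0].object); // closest object first
  }
});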

Camera Calibration - Tsai's Algorithm javascript three.js implementation

I'm trying to build a camera calibration function for a three.js app and could very much do with some help.
I have a geometry in a 3D scene; let's say for this example it is a cube. As this object and scene exist in 3D, all properties of the cube are known.
I also have a physical copy of the geometry.
What I would like to do is take a photo of the physical geometry and mark 4+ points on the image in x and y. These points correspond to 4+ points in the 3D geometry.
Using these point correspondences, I would like to work out the orientation of the camera relative to the geometry in the photo, and then match a virtual camera to the 3D geometry in the three.js scene.
I looked into the possibility of using an AR lib such as JS-aruco or JSARToolkit, but these systems need a marker, whereas my system needs to be markerless. The user will choose the 4 (or more) points on the image.
I've been doing some research and have identified that Tsai's camera calibration algorithm should suit my needs.
While I have a good knowledge of JavaScript and three.js, my linear algebra isn't the best, and hence I am having trouble translating the algorithm into JavaScript.
If anyone could give me some pointers, or could explain the process of Tsai's algorithm in a JavaScript manner, I would be so super ultra thankful.
