I am developing a 3D engine with WebGL and I am trying to implement shadows. My logic is as follows:
The first time the scene is rendered I loop over all meshes and create and compile a shader program (vertex and fragment shader) for each one. I only have one shader program per mesh, so when the shader is created I need to know the lights the scene has, the mesh's material, and a few other things.
Once the shader is created I attach it to the mesh object and render the object. On subsequent frames the shader is not created again (because it already exists).
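In rough pseudocode, the per-frame flow I have is something like this (buildProgram and drawMesh just stand in for my engine's own helpers, they are not real code I'm showing here):

// Lazy, compile-once shader setup: the program is built the first time the
// mesh is rendered and reused on every later frame.
for (const mesh of scene.meshes) {
  if (!mesh.program) {
    // Building the GLSL needs the scene's lights and the mesh's material.
    mesh.program = buildProgram(gl, scene.lights, mesh.material);
  }
  gl.useProgram(mesh.program);
  drawMesh(gl, mesh); // bind attributes/uniforms and issue the draw call
}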
I have heard about shadow mapping. To implement it I need to render to a texture and compute the distance between the light and the current fragment, and do this for each light source. So if I have 2 lights I need to do this process twice and then pass those textures to the shader that renders the scene.
The problem is that I could create 100 lights if I wanted, and I would then need to render 100 textures and pass them to the shader that renders the scene. But OpenGL and WebGL have a limited number of texture units, so I couldn't bind all of those textures to render the complete scene.
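For each light, the depth pass I have in mind would look roughly like this (a sketch assuming WebGL2; createShadowMap, renderSceneDepth and light.viewProjectionMatrix are placeholder names, not code that exists in my engine):

const SHADOW_SIZE = 1024;

function createShadowMap(gl) {
  // One depth texture plus framebuffer per light.
  const depthTexture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, depthTexture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT24, SHADOW_SIZE, SHADOW_SIZE,
                0, gl.DEPTH_COMPONENT, gl.UNSIGNED_INT, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

  const framebuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, depthTexture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { framebuffer, depthTexture };
}

function renderShadowMaps(gl, lights) {
  // One depth-only pass per light, rendered from the light's point of view.
  for (const light of lights) {
    if (!light.shadowMap) light.shadowMap = createShadowMap(gl);
    gl.bindFramebuffer(gl.FRAMEBUFFER, light.shadowMap.framebuffer);
    gl.viewport(0, 0, SHADOW_SIZE, SHADOW_SIZE);
    gl.clear(gl.DEPTH_BUFFER_BIT);
    renderSceneDepth(gl, light.viewProjectionMatrix); // engine-specific depth-only scene render
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
}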
How can I implement shadow mapping with an arbitrary number of lights?
Is there a way to import three.js material qualities and scene items (like lights) into my shader material, in order to light it and cause it to cast shadows (on itself)?
I'm using react-three-fiber, and so haven't been able to find appropriate resources yet.
Here's my code: https://codesandbox.io/s/r3f-wavey-image-shader-forked-nm3ykn?file=/src/App.js
Let's say I have a geometry whose vertices I am using to create Points or an InstancedMesh. But then I want to change this underlying geometry to something else, say from a cone to a sphere, or to anything else with the same number of vertices. I would like to animate between these without using morph targets, so I guess I need a custom vertex shader, which is fine; however, I'm a bit stuck on how to pass the additional BufferGeometry data into the vertex shader.
I can't really see how I might do this with uniforms. Has anyone got any ideas? As I understand it, I can only use int/float/bool/vec/ivec/mat, but I need multiple vertex buffers. Is it just an array of some kind?
I guess I'm trying to find a way of having multiple "full" geometries which I can interrogate within the vertex shader, but I can't figure out how to access/pass these additional buffers into WebGL from three.js.
The default position of the vertices is defined in a BufferAttribute called position. This is passed to the vertex shader as attribute vec3 position.
You could create a new BufferAttribute in your geometry called position2 and declare it in your vertex shader as attribute vec3 position2.
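For example, something along these lines (a rough sketch; coneGeometry, sphereGeometry and the uMix uniform are made-up names, both geometries must have the same vertex count, and older three.js releases use addAttribute instead of setAttribute):

const geometry = coneGeometry.clone();
// Copy the target shape's positions into a second attribute.
geometry.setAttribute('position2', sphereGeometry.getAttribute('position').clone());

const material = new THREE.ShaderMaterial({
  uniforms: { uMix: { value: 0.0 } }, // animate from 0 to 1 to morph
  vertexShader: `
    attribute vec3 position2;
    uniform float uMix;
    void main() {
      // position, projectionMatrix and modelViewMatrix are injected by ShaderMaterial.
      vec3 morphed = mix(position, position2, uMix);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(morphed, 1.0);
      gl_PointSize = 2.0;
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `
});

const points = new THREE.Points(geometry, material);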
Raycasting selection works fine in my project for static meshes; however, for animated meshes the ray selection doesn't seem to see the movement of the mesh and only responds to the mesh's non-animated (original) position.
With the animated model, the raycaster only picks up the pose from the first frame; I draw a small red dot where the model is detected.
The bone matrices are computed by the CPU, but the new vertex positions are computed by the GPU.
So the CPU has access to the first pose only.
That's why raycasting does not work (properly) for skinned meshes.
My idea is to update the model position when updating the animation, or use GPU calculations to get the location, but I don't know how to do it. I'm looking forward to your suggestions. Thank you.
Static model
Animated model
jsfiddle view
Currently, raycasting in three.js supports morph targets (for THREE.Geometry only) by replicating the vertex shader computations on the CPU.
So yes, in theory, you could add the same functionality to support raycasting for skinned meshes for both THREE.Geometry and THREE.BufferGeometry. However, a more efficient approach would be to use "GPU picking".
You can find an example of GPU picking in this three.js example. In the example, the objects are not animated, but the concept is the same.
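In condensed form, the idea looks roughly like this (a sketch, not the example's actual code; pickingId and idToObject are made-up bookkeeping, ids should start at 1 so 0 can mean "no hit", and the render call uses the old r98-style signature):

const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixelBuffer = new Uint8Array(4);
const idToObject = {}; // id -> mesh, filled when pickable meshes are registered

function pick(mouseX, mouseY, renderer, scene, camera) {
  // Temporarily swap every pickable mesh to a flat material whose color encodes its id.
  // Non-pickable objects should be hidden or rendered black during this pass (omitted here).
  scene.traverse(function (obj) {
    if (obj.isMesh && obj.userData.pickingId) {
      obj.userData.savedMaterial = obj.material;
      obj.material = new THREE.MeshBasicMaterial({
        color: new THREE.Color(obj.userData.pickingId),
        skinning: obj.isSkinnedMesh === true // keep GPU skinning in the picking pass
      });
    }
  });

  // Render only the 1x1 region under the mouse into the picking target.
  camera.setViewOffset(renderer.domElement.width, renderer.domElement.height, mouseX, mouseY, 1, 1);
  renderer.render(scene, camera, pickingTarget, true);
  camera.clearViewOffset();

  // Restore the original materials.
  scene.traverse(function (obj) {
    if (obj.userData.savedMaterial) {
      obj.material = obj.userData.savedMaterial;
      delete obj.userData.savedMaterial;
    }
  });

  // Decode the id from the rendered pixel.
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixelBuffer);
  const id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
  return idToObject[id] || null;
}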
three.js r.98
I know that we can load JSON models in WebGL, but I don't know how to animate them if we have a rigged model loaded. Is there any way of doing this without three.js?
You can animate a rigged model using THREE.js (though you seem to not want to use the built-in functionality).
What THREE.js does in the background is pass all the bone transforms (an array of matrices) to the vertex shader, and per vertex it passes the bone indices (up to 4) and bone weights. In the vertex shader it blends between those matrices based on the vertex weights and transforms the vertex. So in theory you can pass those values to the vertex shader yourself to animate things, or just use THREE.js's animation routines.
It can use two methods to store all this data. One method uses an "image texture" which stores all those matrices and does some fancy footwork to turn the texture data back into matrices in the vertex shader. The other method is just passing a uniform matrix array (for newer graphics cards this is the preferred method).
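A stripped-down version of that vertex-shader blending might look like this (illustrative only, not THREE.js's exact built-in shader code; 64 is an arbitrary maximum bone count):

const skinningVertexShader = `
  attribute vec3 position;
  attribute vec4 skinIndex;      // up to 4 bone indices per vertex
  attribute vec4 skinWeight;     // matching blend weights (should sum to 1.0)

  uniform mat4 projectionMatrix;
  uniform mat4 modelViewMatrix;
  uniform mat4 boneMatrices[64]; // one matrix per bone, re-uploaded every frame by the CPU

  void main() {
    // Blend the four bone matrices by their weights, then transform the vertex.
    mat4 skinMatrix =
        skinWeight.x * boneMatrices[int(skinIndex.x)] +
        skinWeight.y * boneMatrices[int(skinIndex.y)] +
        skinWeight.z * boneMatrices[int(skinIndex.z)] +
        skinWeight.w * boneMatrices[int(skinIndex.w)];
    gl_Position = projectionMatrix * modelViewMatrix * skinMatrix * vec4(position, 1.0);
  }
`;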
I am building an application that dynamically loads images from a server to use as textures in the scene and I am working on how to load/unload these textures properly.
My simple question is: where, in the Three.js call graph, do textures get loaded and/or updated onto the GPU? Is it when I create a texture (var tex = new THREE.Texture()) or when I apply it to a mesh (var mesh = new THREE.Mesh(geom, mat))? The Texture class of Three suggests that textures are not loaded when the texture is created, but I cannot find anything in Mesh either.
Am I missing something? Are textures loaded in the render loop rather than on object creation? That would probably make sense.
Thanks in advance!
All GPU instructions have been abstracted away to the WebGLRenderer.
This means the creation of any object within three.js will not interact with the GPU in the slightest until you call:
renderer.render(scene, camera);
This call will automatically set up all the relevant WebGL buffers, shaders, attributes, uniforms, textures, etc. So until that point in time, all three.js meshes with their materials and geometries are really just nicely abstracted objects, completely separate from the way they are rendered to the screen (why assume they will be rendered at all?).
The main reason for this is that there are other renderers, such as the CanvasRenderer, which have an entirely different API.
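For example (a minimal sketch; scene, camera, renderer and someNewImage are assumed to exist elsewhere, and 'diffuse.jpg' is a made-up path):

// Nothing below touches the GPU yet.
var texture = new THREE.TextureLoader().load('diffuse.jpg');        // async image load, CPU-side object
var material = new THREE.MeshBasicMaterial({ map: texture });
var mesh = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
scene.add(mesh);                 // still no WebGL calls

renderer.render(scene, camera);  // buffers and shaders are set up here; the texture is
                                 // uploaded on the first render after its image has loaded

// If you later swap the image yourself, flag the texture so the next render re-uploads it:
texture.image = someNewImage;
texture.needsUpdate = true;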