I am building an application that dynamically loads images from a server to use as textures in the scene and I am working on how to load/unload these textures properly.
My simple question is: where, in the Three.js call graph, do textures get loaded and/or uploaded to the GPU? Is it when I create a texture (var tex = new THREE.Texture()) or when I apply it to a mesh (var mesh = new THREE.Mesh(geom, mat))? The Texture class of Three.js suggests that textures are not loaded when creating the texture, but I cannot find anything in Mesh either.
Am I missing something? Are textures loaded in the render loop rather than on object creation? That would probably make sense.
Thanks in advance!
All GPU instructions have been abstracted away to the WebGLRenderer.
This means that creating any object within three.js will not interact with the GPU in the slightest until you call:
renderer.render(scene, camera);
This call will automatically set up all the relevant WebGL buffers, shaders, attributes, uniforms, textures, etc. So until that point in time, all three.js meshes with their materials and geometries are really just nicely abstracted objects, completely separated from the way they are rendered to the screen (why assume they will be rendered at all?).
The main reason for this is that there are other renderers, such as the CanvasRenderer, which have an entirely different API.
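To make the lazy-upload pattern concrete, here is an illustrative plain-JavaScript sketch. It is NOT three.js source code; the class names and the needsUpdate flag simply mimic how WebGLRenderer defers all GPU work until render():

```javascript
// Illustrative sketch only -- not three.js internals. Creating objects does
// nothing GPU-related; the renderer "uploads" lazily the first time render()
// encounters a texture flagged needsUpdate (as THREE.Texture does).
class Texture {
  constructor(image) {
    this.image = image;
    this.needsUpdate = true; // upload is deferred, like in THREE.Texture
  }
}

class LazyRenderer {
  constructor() {
    this.uploaded = new Set(); // stands in for the renderer's GPU-side state
  }
  render(scene) {
    for (const tex of scene.textures) {
      if (tex.needsUpdate) {
        this.uploaded.add(tex); // stands in for gl.texImage2D(...)
        tex.needsUpdate = false;
      }
    }
  }
}

const tex = new Texture('diffuse.png');
const renderer = new LazyRenderer();
console.log(renderer.uploaded.size); // 0 -- creating the texture uploaded nothing

renderer.render({ textures: [tex] });
console.log(renderer.uploaded.size); // 1 -- the "upload" happened during render()
```

The same reasoning explains why texture.dispose() exists in three.js: the renderer holds GPU-side state on your behalf, so freeing it is also routed through the renderer rather than through the Texture object itself.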
Related
I'm trying to move the pixels of an image on a texture onto a set of vertices, so I can use them as a point cloud in WebGL.
One way to do this would be to
render the texture to a framebuffer, then use gl.readPixels() to copy the pixels into a JavaScript array, then move the array back to the GPU with gl.bufferData().
Treating the data as vertices, the point cloud could be rendered with gl.drawArrays() using the gl.POINTS primitive.
But this requires the data to move from the GPU to the CPU and back again, which could become costly, especially for video.
Is there a way to move data directly from a texture to a vertex list, without leaving the GPU?
Any references, suggestions, or code examples greatly appreciated!
Thanks to Blindman67 for pointing out that you can access textures in the vertex shader; I didn't know this. You can simply use texture2D(). Working examples are https://www.khronos.org/registry/webgl/conformance-suites/2.0.0/conformance/rendering/vertex-texture-fetch.html for WebGL, and http://webglsamples.org/WebGL2Samples/#texture_vertex for WebGL2. A good search term is "WebGL vertex texture fetch". Don't waste time (as I did) following old links and trying to get calls like texture2DLodEXT() to work.
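For reference, a vertex texture fetch boils down to sampling in the vertex stage. The sketch below shows the shape of such a shader as a JavaScript source string; all the attribute/uniform names are made up for illustration and are not taken from the linked samples:

```javascript
// Sketch of a vertex shader performing a vertex texture fetch (WebGL1 GLSL).
// Names (aUv, uPositions) are illustrative placeholders.
const vertexShaderSource = `
attribute vec2 aUv;           // which texel this point should read
uniform sampler2D uPositions; // texture holding per-point data (e.g. positions)
void main() {
  vec4 data = texture2D(uPositions, aUv); // the vertex texture fetch
  gl_Position = vec4(data.xyz, 1.0);
  gl_PointSize = 2.0;
}`;
```

Note that in WebGL1 the number of vertex texture units can legally be zero, so you should check gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) before relying on this; WebGL2 guarantees support.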
EDIT:
I had a question about exporting to OBJ and MTL, but discovered that I could export from three.js using GLTFExporter.js, and had success getting the geometry and texture out of three.js that way.
The issue I'm having with the GLTF exporter is that I have textures with offset and repeat settings that do not seem to be exported from three.js. When I open the file in Blender, the whole texture covers the plane mesh, which only showed a small part of the texture in the Three.js scene.
Might anyone know what I could add to the GLTF Exporter to be able to record and keep the repeat and offset texture settings?
Many Thanks :)
I've hit this myself, and as far as I know, the answer is no.
Offset and repeat are THREE.js-specific features. Some other libraries have equivalents; some engines use direct texture-matrix manipulation to achieve the same effect.
One workaround is to modify your model's UV coordinates before exporting to reflect the settings of texture.offset and texture.repeat.
You would basically multiply each vertex UV by texture.repeat and then add texture.offset. That would effectively "bake" those parameters into the model's UVs, but it would then require you to reset .repeat and .offset back to (1, 1) and (0, 0) respectively, in order to render the model correctly again in THREE.js.
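The baking step described above could be sketched like this (plain arrays; the helper name bakeUVTransform and its parameters are illustrative, not a three.js API):

```javascript
// Bake repeat/offset into UV coordinates: uv' = uv * repeat + offset,
// mirroring the transform three.js applies at sampling time.
// uvs is a flat [u0, v0, u1, v1, ...] array; repeat/offset are [x, y] pairs.
function bakeUVTransform(uvs, repeat, offset) {
  const out = new Float32Array(uvs.length);
  for (let i = 0; i < uvs.length; i += 2) {
    out[i]     = uvs[i]     * repeat[0] + offset[0]; // u
    out[i + 1] = uvs[i + 1] * repeat[1] + offset[1]; // v
  }
  return out;
}
```

After writing the baked values back into the geometry's uv attribute, you would reset texture.repeat to (1, 1) and texture.offset to (0, 0) before exporting, as described above.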
Here's a slightly relevant thread from the GLTF working group:
https://github.com/KhronosGroup/glTF/issues/107
I am developing a 3D engine with WebGL and I am trying to implement shadows. My logic is as follow:
The first time the scene is rendered, I loop over all meshes and create and compile the shader program (vertex and fragment shader). I only have one shader program per mesh, so when the shader is created I need to know the scene's lights, the mesh's material, and other considerations.
Once the shader is created, I attach it to the mesh object and render the object. On subsequent iterations the shader is not created again (because it already exists).
I heard about shadow mapping. In order to implement it, I need to render to a texture and compute the distance between the light and the current fragment, and do this for each light source. So if I have 2 lights, I need to do this process twice and then pass those textures to the shader that renders the scene.
The problem is that I could create 100 lights if I wanted, and I would then need to render 100 textures and pass them to the shader that renders the scene. But OpenGL and WebGL have a limited number of texture units, so I couldn't bind all the textures needed to render the complete scene.
How can I implement shadow mapping with an arbitrary number of lights?
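One common way around the texture-unit limit is to pack every light's shadow map into a single large atlas texture, so the scene shader binds one texture regardless of light count and each light gets its own tile. The helper below is an illustrative sketch of the tile layout math (names and the square-grid layout are assumptions, not from any particular engine):

```javascript
// Compute the pixel rectangle inside a square shadow atlas where the
// shadow map for light `lightIndex` should be rendered. Lights are laid
// out on a tiles-by-tiles square grid, row-major.
function atlasViewport(lightIndex, lightCount, atlasSize) {
  const tiles = Math.ceil(Math.sqrt(lightCount)); // tiles per row/column
  const tile = Math.floor(atlasSize / tiles);     // side length of one tile
  return {
    x: (lightIndex % tiles) * tile,
    y: Math.floor(lightIndex / tiles) * tile,
    size: tile,
  };
}
```

During the shadow pass you would call gl.viewport() with each light's rectangle before rendering its depth, and in the scene shader offset/scale the shadow lookup UVs by the same rectangle. Note the trade-off: with 100 lights in a 4096-pixel atlas, each shadow map is only about 409 pixels square.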
Is there a good/recommended way to do image processing in fragment shaders and then export the results to an external JavaScript structure?
I am currently using a shader texture with THREE.js (WebGL 1.0) to display my data.
It contains an array of 2D textures as uniform. I use it to emulate a 3D texture.
At this point all my data lives in the fragment shader, and I want to run some image processing on the whole data set (not just the pixels on screen), such as thresholding, and then export the results of the segmentation to a proper JS object.
I want to do it in the shaders as it runs so much faster.
Render-to-texture would not help in this case (I believe) because I want to modify/update the whole 3D texture, not only what is visible on screen.
It doesn't seem that the EffectComposer from THREE.js is what I am looking for either.
Does it make sense? Am I missing something?
Is there some code/demo/literature available out there on how to do "advanced" image processing in shaders (or better yet with a THREE.js shader texture), and then save out the results?
Best
You can render as usual into the canvas and then use canvas.getImageData() to retrieve the image.
Then there is the method renderer.readRenderTargetPixels() (see here). I haven't used it yet, but it appears to do what you want.
So you can just render as you described (rendering to texture won't overwrite your textures, as far as I can tell) into a framebuffer (i.e. using THREE.WebGLRenderTarget) and then use that method to retrieve the image data.
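As a sketch of the readback-and-convert step: renderer.readRenderTargetPixels(target, x, y, width, height, buffer) fills a typed array with RGBA values, which you can then reshape into a plain JS structure. The helper below (name and threshold-on-red logic are illustrative, not from three.js) shows the conversion half, which runs on plain arrays:

```javascript
// Convert an RGBA pixel buffer (4 bytes per pixel, as filled by
// renderer.readRenderTargetPixels) into a flat 0/1 segmentation mask
// by thresholding the red channel. Illustrative helper.
function rgbaToMask(buffer, threshold) {
  const mask = [];
  for (let i = 0; i < buffer.length; i += 4) {
    mask.push(buffer[i] >= threshold ? 1 : 0); // red channel only
  }
  return mask;
}
```

For a 3D texture emulated as a stack of 2D slices, you would render each slice into the render target in turn and concatenate the per-slice masks on the JS side.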
Currently I'm working with Three.js and I need to combine a ShaderMaterial (because I'm using a custom shader that combines several textures into a single one) with a MeshPhongMaterial, since I don't want to lose all the work (lights and reflection) that the MeshPhongMaterial shader does.
Is there a way to do this?
The solution was rather easy: I just took the shader code from the phong material and added my custom code in the section where the texel variable is assigned.
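The same splice can be expressed programmatically. In recent three.js versions, material.onBeforeCompile hands you the assembled shader source, so you can patch it without maintaining a full copy of the phong shader. The helper below is a sketch; it assumes the #include <map_fragment> chunk marker (the spot where the phong fragment shader samples the map texel) is still present in your three.js version:

```javascript
// Splice custom texel-sampling code into a fragment shader by replacing
// the chunk marker where the stock shader assigns the map texel.
// Illustrative helper -- the marker string is a three.js shader-chunk name.
function patchShader(fragmentShader, customTexelCode) {
  return fragmentShader.replace('#include <map_fragment>', customTexelCode);
}
```

Usage would look roughly like: material.onBeforeCompile = (shader) => { shader.fragmentShader = patchShader(shader.fragmentShader, myMultiTextureCode); }; — this keeps all of MeshPhongMaterial's lighting intact while swapping only the texel stage.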