I'm trying to move the pixels of an image on a texture onto a set of vertices, so I could use them as a point cloud, in WebGL.
One way to do this would be to
render the texture to a framebuffer, use gl.readPixels() to copy the pixels into a JavaScript array, then move the array back to the GPU with gl.bufferData().
Treating the data as vertices, the point cloud could be rendered with gl.drawArrays() using the gl.POINTS primitive.
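A rough sketch of that round trip in plain WebGL (assuming gl, a framebuffer fb with the texture attached, its width/height, and an attribute location positionLoc already exist):

// Read the rendered texture back to the CPU as RGBA bytes.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// Upload the same bytes back to the GPU as vertex data.
const pointBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, pointBuffer);
gl.bufferData(gl.ARRAY_BUFFER, pixels, gl.STATIC_DRAW);

// Draw one point per pixel; the vertex shader decides how to interpret
// the 4 bytes per vertex (here exposed as a normalized vec4 attribute).
gl.vertexAttribPointer(positionLoc, 4, gl.UNSIGNED_BYTE, true, 0, 0);
gl.enableVertexAttribArray(positionLoc);
gl.drawArrays(gl.POINTS, 0, width * height);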
But this requires the data to move from the GPU to the CPU and back again, which could become costly, especially for video.
Is there a way to move data directly from a texture to a vertex list, without leaving the GPU?
Any references, suggestions, or code examples greatly appreciated!
Thanks to Blindman67 for pointing out that you can access textures in the vertex shader; I didn't know this. You can simply use texture2D(). Working examples are https://www.khronos.org/registry/webgl/conformance-suites/2.0.0/conformance/rendering/vertex-texture-fetch.html for WebGL and http://webglsamples.org/WebGL2Samples/#texture_vertex for WebGL2. A good search term is "WebGL vertex texture fetch". Don't waste time (as I did) following old links and trying to get calls like texture2DLodEXT() to work.
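For reference, the vertex shader side is just an ordinary texture2D() call; a minimal sketch (identifiers are illustrative, and the hardware must report gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0):

const vertexShaderSource = `
  attribute vec2 a_texCoord;      // each point's lookup position in the data texture
  uniform sampler2D u_positions;  // texture holding the point data
  uniform mat4 u_matrix;
  void main() {
    vec4 data = texture2D(u_positions, a_texCoord);  // plain texture2D() works in the vertex shader
    gl_Position = u_matrix * vec4(data.xyz, 1.0);
    gl_PointSize = 1.0;
  }
`;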
I've seen shaders that dynamically create an outline around edges based on how much difference there is between the depth (distance from camera to surface) of a pixel at an edge and the depth of an adjacent pixel (a smaller difference can mean a thinner outline or none at all). Like these renders:
And I'm interested in using such a shader on my three.js renders, but I think I need to figure out how to access depth data for each pixel.
Three.js documentation mentions a depth setting:
depth - whether the drawing buffer has a depth buffer of at least 16 bits. Default is true.
But I'm not sure what it means for the drawing buffer to have a depth buffer. The image buffers I'm familiar with are pixel buffers, with no depth information. Where would I access this depth buffer?
There's an example on the Three.js website that renders the scene to a THREE.WebGLRenderTarget with its depthBuffer attribute set to true. This gives you access to depth data.
The idea is as follows:
Render the main scene to a WebGLRenderTarget. This target will contain RGB and depth data that can be accessed via its .texture and .depthTexture attributes, respectively.
Take these 2 textures, and apply them to a plane with custom shaders.
In the plane's custom shaders, you can access the texture data to perform whatever calculations you want to play with colors and depth.
Render the second scene (that contains only the plane) to canvas.
Here's the link to the source code of that example. Notice you can comment out the code on line 73 to allow the color data to display.
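Condensed, the setup could look roughly like this (the post-processing shaders, the main scene, camera and renderer are assumed to exist elsewhere; the outline logic would live in the fragment shader comparing neighbouring tDepth samples):

// 1. Render target that keeps a depth texture alongside the color texture.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.depthTexture = new THREE.DepthTexture(window.innerWidth, window.innerHeight);

// 2. Full-screen quad in a second scene, fed both textures.
const postMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDiffuse:   { value: target.texture },
    tDepth:     { value: target.depthTexture },
    cameraNear: { value: camera.near },
    cameraFar:  { value: camera.far }
  },
  vertexShader: postVertexShader,      // simple pass-through
  fragmentShader: postFragmentShader   // sample tDiffuse/tDepth and draw outlines
});
const postScene = new THREE.Scene();
postScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), postMaterial));
const postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);

// 3. Render the main scene into the target, then the quad to the canvas.
renderer.setRenderTarget(target);      // older three.js: renderer.render(scene, camera, target)
renderer.render(scene, camera);
renderer.setRenderTarget(null);
renderer.render(postScene, postCamera);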
Three.js already has a MeshToonMaterial; there's no need to create a new one.
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_variations_toon.html
html, body, iframe {margin:0;width:100%;height:100%;border:0}
<iframe src="https://threejs.org/examples/webgl_materials_variations_toon.html"></iframe>
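Basic usage is just the following (the gradientMap texture is optional and assumed to be loaded elsewhere; geometry and scene are placeholders):

// Toon shading out of the box; a gradientMap texture controls the banding.
const material = new THREE.MeshToonMaterial({
  color: 0x049ef4,
  gradientMap: gradientTexture   // a small THREE.Texture with NearestFilter
});
scene.add(new THREE.Mesh(geometry, material));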
I'm trying to load the same image with three.js into a large number (~1000) of two-dimensional shapes, but with a different offset in every shape.
I've taken this demo from the official website and customized it into this other demo, with all my shapes and a random background texture.
The problem is that if I clone the texture once per shape the page eats a lot of RAM and it ends up crashing.
You can see this in action by going into the JavaScript and changing the comments in the addShape function (you'll find the instructions in the code).
I've done some research and found some results, like this open issue or this older question where cloning the texture is recommended; still, nothing seems to work in my example.
Am I doing something wrong? Has something changed since those posts about this problem?
Maybe I'm misunderstanding the problem, but why don't you change the UV coordinates of the individual shapes to align the texture and use just one texture?
From documentation:
Geometry.faceVertexUvs
Array of face UV layers, used for mapping textures onto the geometry. Each UV layer is an array of UVs matching the order and number of vertices in faces.
To signal an update in this array, Geometry.uvsNeedUpdate needs to be set to true.
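As a rough sketch with the legacy THREE.Geometry API the quote refers to (the offsets and geometry names here are illustrative):

// Shift one shape's UVs so the single shared texture appears offset,
// instead of cloning the texture per shape.
function offsetShapeUVs(geometry, offsetU, offsetV) {
  geometry.faceVertexUvs[0].forEach(function (faceUvs) {
    faceUvs.forEach(function (uv) {
      uv.x += offsetU;
      uv.y += offsetV;
    });
  });
  geometry.uvsNeedUpdate = true;
}

// All shapes reuse the same texture instance; set texture.wrapS/wrapT
// to THREE.RepeatWrapping so the offsets can wrap around.
offsetShapeUVs(shapeGeometry, Math.random(), Math.random());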
Is there a good/recommended way to do image processing in the fragment shaders and then export the results to an external JavaScript structure?
I am currently using a shader texture with THREE.js (WebGL 1.0) to display my data.
It contains an array of 2D textures as a uniform; I use it to emulate a 3D texture.
At this point all my data is stored in the fragment shader, and I want to run some image processing on the whole dataset (not just the pixels on screen), such as thresholding, and then export the results of the segmentation to a proper JS object.
I want to do it in the shaders as it runs so much faster.
Render-to-texture would not help in this case (I believe) because I want to modify/update the whole 3D texture, not only what is visible on screen.
It doesn't seem that the EffectComposer from THREE.js is what I am looking for either.
Does it make sense? Am I missing something?
Is there some code/demo/literature available out there on how to do "advanced" image processing in the shaders (or better yet with a THREE.js shader texture), and then save out the results?
You can render as usual into the canvas and then retrieve the image, e.g. by drawing the canvas into a 2D canvas and calling getImageData() on its context.
Then there is the method renderer.readRenderTargetPixels() (see here). I haven't used it yet, but it appears to do what you want.
So you can just render as you described (rendering to a texture won't overwrite your source textures as far as I can tell) into a framebuffer (i.e. using THREE.WebGLRenderTarget) and then use that method to retrieve the image data.
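A minimal sketch of that read-back (sizes and the processing scene/camera names are placeholders):

// Render the processing pass into an off-screen target...
const target = new THREE.WebGLRenderTarget(width, height);
renderer.setRenderTarget(target);   // older three.js: pass the target as a third argument to render()
renderer.render(processingScene, processingCamera);
renderer.setRenderTarget(null);

// ...then copy the result into a typed array on the CPU.
const buffer = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, buffer);
// buffer now holds RGBA bytes you can repackage into any JS structure.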
I have a PointCloud called "cloud" centered at (0,0,0) with around 1000 vertices. The vertices' positions are updated using a vertex shader. I now would like to print out each vertex's position to the console every few seconds from the render loop.
If I inspect the point cloud's vertices using
console.log(cloud.geometry.vertices[100])
in the render loop, I always get a vertex with all zeros, which, judging by all the particles zipping around, can't be right.
I looked at this similar post: How to get the absolute position of a vertex in three.js? and tried
var vector = cloud.geometry.vertices[100].clone();
vector.applyMatrix4( cloud.matrixWorld );
which still gave me a vector of all zeros (for all of the vertices). Using scene.matrixWorld in place of cloud.matrixWorld did not work either.
I also tried using cloud.updateMatrixWorld() and scene.updateMatrixWorld(). Lastly, I tried setting cloud.geometry.verticesNeedUpdate = true.
So far, all of the advice I've seen has used the above, but none of them work for me. I have a feeling that the array just isn't getting updated with the correct values, but I don't know why that is. Any ideas?
That is because the vertices never change their properties on the CPU, only on the GPU (in the vertex shader).
This is a one-way ticket: the communication goes CPU -> GPU, not the other way around.
If you need to work with the vertex positions, you have to do the calculations on the CPU and re-upload the vertex batch to the GPU every time something changes.
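With the legacy geometry used in the question, that update loop would look roughly like this (newX/newY/newZ stand in for whatever CPU-side calculation you need):

// Update the vertices in JavaScript, then flag the geometry so
// three.js re-uploads the vertex batch to the GPU on the next render.
cloud.geometry.vertices.forEach(function (v, i) {
  v.set(newX(i), newY(i), newZ(i));
});
cloud.geometry.verticesNeedUpdate = true;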
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only on the Z axis.
I tried to use:
a cube mapping shader - PROBLEM: artifacts with the shadow planes; the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artifacts with the shadow plane - it has edge highlighting, OR only the other sprites are drawn, without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing in the models
What else can I try? Thanks!
You could maybe try drawing in several passes, i.e. rendering the background scene to a buffer first, and then doing a second pass over that first "buffer", maybe using the buffer as the background (painting it in 2D with an orthographic projection and disabling depth buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
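In three.js terms, that two-pass idea could look roughly like this (the scene and camera names are illustrative):

// Pass 1: draw the background scene, keeping it by disabling auto-clear.
renderer.autoClear = false;
renderer.clear();
renderer.render(backgroundScene, backgroundCamera);

// Pass 2: clear only the depth buffer so the 3D objects always draw on top.
renderer.clearDepth();
renderer.render(mainScene, mainCamera);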
If you want a "3d" background i.e. something that will follow the rotation of your camera, but not react to the movement (be infinitely far), then the only way to do it is with a cubemap.
The other solution is a environment dome - a fully 3d object.
If you want a static background, then you should be able todo just a html background, i'm not sure why this would fail and what 'aliasing in models' you are talking about.
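For the cubemap case, newer three.js versions also have a built-in route via scene.background (the face file names here are placeholders):

// Load six faces as a CubeTexture and let the renderer draw it behind everything.
const cubeTexture = new THREE.CubeTextureLoader()
  .setPath('textures/cube/')
  .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);
scene.background = cubeTexture;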