Is there a good/recommended way to do image processing in the fragment shaders then export the results to an external Javascript structure?
I am currently using a Shaders Texture with THREEJS (WebGL 1.0) to display my data.
It contains an array of 2D textures as uniform. I use it to emulate a 3D texture.
At this point all my data is stored in the Fragment Shader and I want to run some image processing on the whole data (not just the pixels on screen), such as thresholding, then export the results of the segmentation to a proper JS object.
I want to do it in the shaders as it runs so much faster.
Render-to-texture would not help in this case (I believe) because I want to modify/update the whole 3D texture, not only what is visible on screen.
It doesn't seem that the EffectComposer from THREEJS is what I am looking for either.
Does it make sense? Am I missing something?
Is there some code/demo/literature available out there on how to do "advanced" image processing in the shaders (or better yet with a THREEJS shader texture), then save out the results?
Best
You can render as usual into the canvas and then read the image back from it (for example with gl.readPixels() or canvas.toDataURL()).
Then there is the method renderer.readRenderTargetPixels() (see the three.js docs). I haven't used it yet, but it appears to do what you want.
So you can just render as you described (rendering to texture won't overwrite your textures as far as I can tell) into a framebuffer (i.e. using THREE.WebGLRenderTarget) and then use that method to retrieve the image data.
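If it helps, here is a rough, untested sketch of that approach; sliceScene, sliceCamera, width and height are placeholders for your own setup, and newer three.js versions select the render target explicitly (older versions accepted it as a third argument to render()):

// Render one slice of the emulated 3D texture into an offscreen target,
// then read the pixels back into a typed array.
var target = new THREE.WebGLRenderTarget(width, height);

renderer.setRenderTarget(target);
renderer.render(sliceScene, sliceCamera);
renderer.setRenderTarget(null);

// RGBA, one byte per channel.
var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);

// 'pixels' is now an ordinary typed array you can threshold, segment,
// or copy into whatever JS structure you need.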
I'm trying to move the pixels of an image on a texture onto a set of vertices, so I could use them as a point cloud, in WebGL.
One way to do this would be to
render the texture to a framebuffer, then use gl.readPixels() to move onto a JavaScript array, then move the array back to the GPU with gl.bufferData().
Treating the data as vertices, the point cloud could be rendered with gl.drawArrays() using the gl.POINTS primitive.
But this requires that the data move from the GPU to the CPU and back again, which could become costly, especially for video.
Is there a way to move data directly from a texture to a vertex list, without leaving the GPU?
Any references, suggestions, or code examples greatly appreciated!
Thanks to Blindman67 for pointing out that you can access textures in the vertex shader; I didn't know this. You can simply use texture2D(). Working examples are https://www.khronos.org/registry/webgl/conformance-suites/2.0.0/conformance/rendering/vertex-texture-fetch.html for WebGL and http://webglsamples.org/WebGL2Samples/#texture_vertex for WebGL2. A good search term is "WebGL vertex texture fetch". Don't waste time (as I did) following old links and trying to get calls like texture2DLodEXT() to work.
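To make the idea concrete, here is a minimal, untested sketch of a WebGL1 vertex shader doing the fetch; the names lookup, map and mvpMatrix are made up for this example, and the hardware must report MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0 for vertex texture fetch to be available:

// Vertex shader source as a JS string; one point per texel.
var vertexShaderSource = [
  'attribute vec2 lookup;',      // which texel this vertex should read
  'uniform sampler2D map;',      // texture holding per-point data
  'uniform mat4 mvpMatrix;',
  'void main() {',
  '  vec4 texel = texture2D(map, lookup);',   // texture fetch in the VERTEX shader
  '  vec3 pos = texel.rgb * 2.0 - 1.0;',      // e.g. decode a color into a position
  '  gl_Position = mvpMatrix * vec4(pos, 1.0);',
  '  gl_PointSize = 2.0;',
  '}'
].join('\n');

var fragmentShaderSource =
  'precision mediump float; void main() { gl_FragColor = vec4(1.0); }';

// Compile and link as usual, then draw with gl.drawArrays(gl.POINTS, 0, pointCount);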
How can I save a DOM element as an SVG file using html2canvas?
For downloading as PNG, I've done something like below:
html2canvas(document.querySelector('#demo')).then(function(canvas) {
  saveAs(canvas.toDataURL(), 'image.png');
});
How can I achieve a similar result to save it as an SVG file?
You don't.
The reason you can export to png/jpg/etc is that the canvas is a pixel graphic presentation layer, so for convenience it knows how to generate the browser-supported image types that use embedded bitmaps.
If you want vector graphics instead, then you'll need to actually draw vectors, and that means not relying on the canvas APIs. You can either roll your own vector drawing instruction set (directly generating SVG yourself, and rasterizing objects to the canvas purely as a presentation layer), which I would recommend against, or you can use one of the several vector graphics packages already out there, such as Paper.js, Three.js, Raphaël, and so forth.
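That said, if the content you care about is already SVG markup in the DOM (or you generate SVG yourself as suggested above), you can skip html2canvas entirely and serialize it with standard browser APIs. A minimal sketch, assuming '#demo svg' is a hypothetical selector for the node you want:

// Serialize an existing <svg> element and trigger a download of it.
var svgNode = document.querySelector('#demo svg');
var markup = new XMLSerializer().serializeToString(svgNode);
var blob = new Blob([markup], { type: 'image/svg+xml;charset=utf-8' });

var link = document.createElement('a');
link.href = URL.createObjectURL(blob);
link.download = 'image.svg';
link.click();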
I am building an application that dynamically loads images from a server to use as textures in the scene and I am working on how to load/unload these textures properly.
My simple question is: where, in the Three.js call graph, do textures get loaded and/or updated onto the GPU? Is it when I create a texture (var tex = new THREE.Texture()) or when I apply it to a mesh (var mesh = new THREE.Mesh(geom, mat))? The Texture class of Three suggests that textures are not loaded when creating the texture. But I cannot find anything in Mesh either.
Am I missing something? Are textures loaded in the render loop rather than on object creation? That would probably make sense.
Thanks in advance!
All GPU instructions have been abstracted away to the WebGLRenderer.
This means the creation of any object within three.js will not interact with the GPU in the slightest until you call:
renderer.render(scene, camera);
This call will automatically setup all the relevant WebGL buffers, shaders, attributes, uniforms, textures, etc. So until that point in time, all three.js meshes with their materials and geometries are really just nicely abstracted objects, completely separated from the way they are rendered to the screen (why assume they will be rendered at all?).
The main reason for this is that there are other renderers, such as the CanvasRenderer, which have an entirely different API.
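A rough sketch of the usual flow; it uses THREE.TextureLoader (older versions used THREE.ImageUtils.loadTexture), and the scene, camera and renderer are assumed to exist already:

// Creating/loading a texture only builds a JS-side object.
var texture = new THREE.TextureLoader().load('textures/example.jpg', function (tex) {
  // The image is decoded here, but it is still NOT on the GPU.
  // The loader flags tex.needsUpdate, so the next render() uploads it.
});

var material = new THREE.MeshBasicMaterial({ map: texture });
var mesh = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), material);
scene.add(mesh);

// Geometry buffers, the shader program and the texture are uploaded here:
renderer.render(scene, camera);

// If you later swap texture.image yourself, flag it so the next render re-uploads it:
// texture.image = newImage;
// texture.needsUpdate = true;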
Basically I have a model (created dynamically from some inputs) in WebGL, in the form of an array of vertices, an array of indices (indicating the surfaces to be drawn using the vertices from the vertex array), and an array of vertex colors.
I need to save this in vrml format for 3D printing. How can I achieve this?
Convert the list of vertices and list of indices into a simple mesh format like OFF or PLY. And then use MeshLab to load the mesh and export it to VRML.
If what you are looking for is a way to automate the conversion to VRML from the same kind of input, then you can probably write the converter yourself by inspecting how a simple box mesh is exported to VRML by MeshLab: the VRML file still contains a list of vertices and a list of indices (among other things). You could also rely on the C++ library VCGLIB (the library used by MeshLab), although given how simple the VRML exporter is, it is unlikely to be worth it.
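If you do decide to write the exporter yourself, a rough, untested sketch of a direct VRML97 writer could look like this; it assumes triangles, per-vertex colors, and flat arrays [x,y,z,...], [i,j,k,...], [r,g,b,...] like the ones described in the question:

// Build a VRML97 string with an IndexedFaceSet from flat arrays.
function toVRML(vertices, indices, colors) {
  var points = [], faces = [], cols = [];
  for (var v = 0; v < vertices.length; v += 3) {
    points.push(vertices[v] + ' ' + vertices[v + 1] + ' ' + vertices[v + 2]);
  }
  for (var i = 0; i < indices.length; i += 3) {
    // Each face is a list of vertex indices terminated by -1.
    faces.push(indices[i] + ' ' + indices[i + 1] + ' ' + indices[i + 2] + ' -1');
  }
  for (var c = 0; c < colors.length; c += 3) {
    cols.push(colors[c] + ' ' + colors[c + 1] + ' ' + colors[c + 2]);
  }
  return '#VRML V2.0 utf8\n' +
    'Shape {\n' +
    '  geometry IndexedFaceSet {\n' +
    '    coord Coordinate { point [ ' + points.join(', ') + ' ] }\n' +
    '    coordIndex [ ' + faces.join(', ') + ' ]\n' +
    '    color Color { color [ ' + cols.join(', ') + ' ] }\n' +
    '    colorPerVertex TRUE\n' +
    '  }\n' +
    '}\n';
}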
Finally, since this mesh is to be used for 3D printing, you may have other requirements (do hole-filling for instance) than the export format. In which case, VCGLIB may actually come in handy to apply mesh processing operations.
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only along the Z axis.
I tried to use:
a cube mapping shader - PROBLEM: artefacts with the shadow planes, the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artefacts with the shadow plane - it has edge highlighting OR it draws only the other sprites without the objects.
an HTML DOM background - PROBLEM: big and ugly aliasing in the models.
What can I try more? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over the first "buffer". Maybe using the buffer as background (painting it in 2D with an orthographic projection, and disabling depth buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
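In three.js terms, a rough (untested) sketch of that two-pass idea could look like the following; bgScene/bgCamera hold the background (for example a full-screen quad with an orthographic camera) and scene/camera are the normal 3D content:

// Render the background first, then the main scene on top of it.
renderer.autoClear = false;

function renderFrame() {
  renderer.clear();                    // clear color + depth once per frame
  renderer.render(bgScene, bgCamera);  // pass 1: background
  renderer.clearDepth();               // so 3D objects always draw over it
  renderer.render(scene, camera);      // pass 2: the actual scene
  requestAnimationFrame(renderFrame);
}
renderFrame();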
If you want a "3D" background, i.e. something that will follow the rotation of your camera but not react to its movement (be infinitely far away), then the only way to do it is with a cubemap.
The other solution is an environment dome - a fully 3D object.
If you want a static background, then you should be able to do just an HTML background; I'm not sure why this would fail or what 'aliasing in models' you are talking about.
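For the cubemap route mentioned above, a minimal sketch (paths and file names are placeholders; newer three.js versions let you assign the cube texture directly as the scene background, where it follows camera rotation but not translation):

// Load the six cube faces and use them as the scene background.
var cubeTexture = new THREE.CubeTextureLoader()
  .setPath('textures/skybox/')     // hypothetical path
  .load(['px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg']);

scene.background = cubeTexture;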