I'm trying to use three.js to load the same image into a large number (~1000) of two-dimensional shapes, but with a different texture offset in every shape.
I've taken this demo from the official website and customized it into this other demo, with all my shapes and a random background texture.
The problem is that if I clone the texture once per shape, the page eats a lot of RAM and ends up crashing.
You can see this in action by going into the JavaScript and changing the comments in the addShape function (you'll find the instructions in the code).
I've done some research and found some results, like this open issue or this older question where cloning the texture is recommended; however, nothing seems to work in my example.
Am I doing something wrong? Has something changed since those posts about this problem were written?
Maybe I'm misunderstanding the problem, but why don't you change the UV coordinates of the individual shapes to align the texture, and use just one texture?
From the documentation:
Geometry.faceVertexUvs
Array of face UV layers, used for mapping textures onto the geometry.
Each UV layer is an array of UVs matching the order and number of
vertices in faces.
To signal an update in this array, Geometry.uvsNeedUpdate needs to be
set to true.
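For example, with the legacy THREE.Geometry-based ShapeGeometry described in that quote, a rough sketch (sharedMaterial and offset are placeholders, not names from the original demo) could shift each shape's UVs instead of cloning the texture:
var geometry = new THREE.ShapeGeometry( shape );                    // legacy Geometry, has faceVertexUvs
var offset = new THREE.Vector2( Math.random(), Math.random() );     // per-shape offset (placeholder)
geometry.faceVertexUvs[ 0 ].forEach( function ( faceUvs ) {
    faceUvs.forEach( function ( uv ) { uv.add( offset ); } );       // shift every UV of every face
} );
geometry.uvsNeedUpdate = true;
var mesh = new THREE.Mesh( geometry, sharedMaterial );              // one material and one texture shared by all shapes
With the texture's wrapS/wrapT set to THREE.RepeatWrapping, offsets outside 0-1 simply wrap around, and only one copy of the texture ever lives in memory.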
Related
I'm trying to move the pixels of an image on a texture onto a set of vertices, so I could use them as a point cloud, in WebGL.
One way to do this would be to render the texture to a framebuffer, then use gl.readPixels() to read it into a JavaScript array, then move the array back to the GPU with gl.bufferData().
Treating the data as vertices, the point cloud could then be rendered with gl.drawArrays() using the gl.POINTS primitive.
But this requires that the data move from the GPU to the CPU and back again, which could become costly, especially for video.
Is there a way to move data directly from a texture to a vertex list, without leaving the GPU?
Any references, suggestions, or code examples greatly appreciated!
Thanks to Blindman67 for pointing out that you can access textures in the vertex shader; I didn't know this. You can simply use texture2D(). Working examples are https://www.khronos.org/registry/webgl/conformance-suites/2.0.0/conformance/rendering/vertex-texture-fetch.html for WebGL and http://webglsamples.org/WebGL2Samples/#texture_vertex for WebGL2. A good search term is "WebGL vertex texture fetch". Don't waste time (as I did) following old links and trying to get calls like texture2DLodEXT() to work.
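As a hedged three.js sketch of the same idea (dataTexture and the points geometry are placeholders), a ShaderMaterial on a THREE.Points object can fetch per-point positions from a texture directly in the vertex shader:
var material = new THREE.ShaderMaterial( {
    uniforms: { dataTexture: { value: dataTexture } },              // texture holding the point data (placeholder)
    vertexShader: [
        'uniform sampler2D dataTexture;',
        'void main() {',
        '    // the built-in "uv" attribute tells each point which texel to read;',
        '    // plain texture2D() works in the vertex stage',
        '    vec3 fetched = texture2D( dataTexture, uv ).rgb;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4( fetched, 1.0 );',
        '    gl_PointSize = 2.0;',
        '}'
    ].join( '\n' ),
    fragmentShader: 'void main() { gl_FragColor = vec4( 1.0 ); }'
} );
var points = new THREE.Points( geometry, material );                // geometry must supply a per-point "uv" attribute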
I am working on a system to procedurally build meshes for "mines". Right now I'm not aiming for visual perfection; I'm focused on the basics.
I've got to the point where I can generate the shape of the mines and, from that, generate two meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but the ground is really hard to map to UV coordinates properly and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation step that splits each triangle at least once and keeps splitting it while its area is greater than X.
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (a displacement map as well; please ignore it for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
I think that, in theory, this should be right. The issue might be the order of the vertices, but if that were the case the texture should appear rotated or otherwise odd, not odd AND stretched like that.
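For reference, this is roughly what that naive mapping looks like with a BufferGeometry (a minimal sketch; positions, minX/maxX/minZ/maxZ and geometry are illustrative names, not my actual code), assigning exactly one UV per vertex by normalizing x/z against the ground's bounding box:
var uvs = new Float32Array( ( positions.length / 3 ) * 2 );         // one (u, v) pair per vertex
for ( var i = 0, j = 0; i < positions.length; i += 3, j += 2 ) {
    uvs[ j ]     = ( positions[ i ]     - minX ) / ( maxX - minX ); // u from x
    uvs[ j + 1 ] = ( positions[ i + 2 ] - minZ ) / ( maxZ - minZ ); // v from z (ground plane)
}
geometry.addAttribute( 'uv', new THREE.BufferAttribute( uvs, 2 ) ); // setAttribute in newer three.js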
Once I get the UV mapping right, the next step will be to correctly implement the displacement mapping.
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to procedurally build the mesh, send it to multiple clients and get the same visual rendering everywhere. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on shaders on the client side; otherwise placing tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the client-side rendering; WebGL and three.js are currently used just as a quick and easy way to view what's being produced without needing a whole client/server infrastructure.
Any suggestions?
Thanks!
I sorted out the issue in my code; it was a pretty silly one: by mistake I was adding 3 UV mappings per triangle instead of 1 per vertex, causing a huge visual mess. Once I fixed that, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
I am trying to retrieve, and then export, the mesh that is being displaced by a displacementMap.
The shader transforms vertices according to this line (from three.js/src/renderers/shaders/ShaderChunk/displacementmap_vertex.glsl):
transformed += normalize( objectNormal ) * ( texture2D( displacementMap, uv ).x * displacementScale + displacementBias );
This displaces each vertex along its normal by an amount sampled from the displacementMap at that vertex's uv coordinates.
I am trying to create this mesh/geometry so that I can then later export it.
I have created a "demo" of the problem here:
Github Page
I would like to get the displaced mesh, as seen in the viewport, upon pressing exportSTL. However, I am only getting the undisplaced plane.
I understand why this happens: the displacement only happens in the shader and does not actually modify the geometry of the plane.
I have not found a method provided by three.js for this, and so far I have not found any way of getting the changes out of the shader.
So I am trying to do it with a function in "demo.js".
However, I am a WebGL/three.js newbie and have trouble re-creating what the shader does.
I have found exporters handling morphTargets, but these are of no help.
After reading this question I tried PlaneBufferGeometry, as it is closer to what the shader uses, but it produces the same result for me.
I think this question originally tried to produce something similar, but the accepted answer is about something unrelated.
In the end I would like to draw on an HTML canvas, which then updates the texture in real time (I have this part working). The user can then export the mesh for 3D printing.
Is there a way three.js can give me the modified geometry of the shader?
Or can someone help me translate the shader line into a "conventional" Three.js function?
Maybe this is totally the wrong approach to get a displaced mesh?
Update - Example is working
Thanks to the example from DeeFisher I can now calculate the displacement on the CPU, as originally suggested by imerso.
If you click on the Github Page now, you will get a working example.
At the moment I do not fully understand why I have to mirror the canvas to get the correct displacement in the end, but this is at worst a minor nuisance.
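For anyone else hitting this, here is a rough sketch of the CPU-side displacement (assuming the displacement map is available as a 2D canvas and the plane is a BufferGeometry; displacementCanvas, displacementScale and displacementBias are placeholders for your own values), mirroring the shader line quoted above:
var ctx = displacementCanvas.getContext( '2d' );
var img = ctx.getImageData( 0, 0, displacementCanvas.width, displacementCanvas.height ).data;
var pos = geometry.attributes.position;
var nor = geometry.attributes.normal;
var uv  = geometry.attributes.uv;
for ( var i = 0; i < pos.count; i++ ) {
    var x = Math.floor( uv.getX( i ) * ( displacementCanvas.width - 1 ) );
    var y = Math.floor( ( 1 - uv.getY( i ) ) * ( displacementCanvas.height - 1 ) );  // textures are flipped in Y, hence the mirroring
    var texel = img[ ( y * displacementCanvas.width + x ) * 4 ] / 255;               // red channel, like .x in GLSL
    var d = texel * displacementScale + displacementBias;
    pos.setXYZ( i,
        pos.getX( i ) + nor.getX( i ) * d,
        pos.getY( i ) + nor.getY( i ) * d,
        pos.getZ( i ) + nor.getZ( i ) * d );
}
pos.needsUpdate = true;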
To do that while still using a shader for the displacement, you will need to switch to WebGL2 and use Transform-Feedback (Google search: WebGL2 Transform-Feedback).
An alternative would be to read the texture back to CPU, and scan it while displacing the vertices using CPU only (Google search: WebGL readPixels).
Both alternatives will require some effort, so no code sample at this time. =)
BABYLON.js can be used in conjunction with THREE.js and it allows you to displace the actual mesh vertices when applying displacement maps:
var sphere = BABYLON.Mesh.CreateSphere("Sphere", 64, 10, scene, true);
sphere.applyDisplacementMap(url, minHeight, maxHeight, onSuccess, uvOffset, uvScale)
See an example of the function in use here.
You can then use a for loop to transfer the BABYLON mesh data into a THREE mesh object.
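A hedged sketch of that transfer (assuming the displaced BABYLON mesh is called sphere, as above, and using the BufferGeometry API; addAttribute is setAttribute in newer three.js):
var positions = sphere.getVerticesData( BABYLON.VertexBuffer.PositionKind );  // flat [x, y, z, ...] array
var indices   = sphere.getIndices();
var geometry  = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( new Float32Array( positions ), 3 ) );
geometry.setIndex( new THREE.BufferAttribute( new Uint32Array( indices ), 1 ) );
geometry.computeVertexNormals();
var threeMesh = new THREE.Mesh( geometry, new THREE.MeshStandardMaterial() );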
So I found out that texturing planets can be really hard. I created a 4096x4096 image and wrapped it around a high-poly sphere. Apart from the possible memory-management/performance issue that comes with a 3-4 MB image, the texture looks bad/pixelated in a close-up (orbital) view.
I was thinking that I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium and high resolution version of each section. If the camera is very close to a particular section, render the high resolution image; if it is far away, remove that image from memory and apply the low or medium version.
To be honest I am not sure what strategy to use to render high quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL/three.js has significantly improved over time, but since this is all done within the browser, I assume choosing the right approach is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything that swaps high detail for low detail.
Because you can't make a 1000000x100000 texture, which is pretty much what you'd need to get the results you want, you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet, a sphere made of say 100 pieces when slightly closer, a sphere made of 1000 pieces when closer still, a sphere made of 10000 pieces when even closer, and so on. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (checking millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing that people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere (fully zoomed out) and transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
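As a simple starting point on the three.js side, the built-in THREE.LOD helper swaps whole meshes by camera distance (the sphere detail levels and distances below are only illustrative):
var lod = new THREE.LOD();
lod.addLevel( new THREE.Mesh( new THREE.SphereGeometry( 10, 16, 12 ), material ), 500 );   // far away: low detail
lod.addLevel( new THREE.Mesh( new THREE.SphereGeometry( 10, 64, 48 ), material ), 100 );   // medium distance
lod.addLevel( new THREE.Mesh( new THREE.SphereGeometry( 10, 256, 192 ), material ), 0 );   // close up: high detail
scene.add( lod );
// in the render loop:
lod.update( camera );
For the tiled, per-section case described above you would still manage which tiles exist and are visible yourself, but this shows the basic switching mechanism.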
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube mapping: each holds one cubemap face.
Then you loop through all the vertices, take the position vector and normalize it. This will yield a sphere.
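A minimal sketch of that normalization step (using a segmented BoxBufferGeometry as the six tessellated planes; the segment count and radius are arbitrary):
var radius = 10;
var geometry = new THREE.BoxBufferGeometry( 2, 2, 2, 32, 32, 32 );   // 6 tessellated faces forming a box
var pos = geometry.attributes.position;
var v = new THREE.Vector3();
for ( var i = 0; i < pos.count; i++ ) {
    v.fromBufferAttribute( pos, i ).normalize().multiplyScalar( radius );  // push every vertex onto the sphere
    pos.setXYZ( i, v.x, v.y, v.z );
}
geometry.computeVertexNormals();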
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096x4096 texture should really be 4096x2048 on the default sphere with an equirectangular mapping, but the newly mapped sphere can hold 6 x 4096x4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can only move along the Z axis.
I have tried:
a cube mapping shader - PROBLEM: artifacts with the shadow planes; the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artifacts with the shadow plane - it has edge highlighting, OR only the other sprites are drawn, without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing on the models
What else can I try? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over the first "buffer". Maybe using the buffer as background (painting it in 2D with an orthographic projection, and disabling depth buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
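In three.js terms that would roughly translate to two scenes and manual clearing (an untested sketch; backgroundScene and backgroundCamera are whatever you use for the background pass):
renderer.autoClear = false;                               // we clear manually between the two passes
function render() {
    renderer.clear();                                     // color + depth
    renderer.render( backgroundScene, backgroundCamera ); // e.g. a full-screen quad with an orthographic camera
    renderer.clearDepth();                                // so the 3D scene always draws on top of the background
    renderer.render( scene, camera );
    requestAnimationFrame( render );
}
render();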
If you want a "3d" background i.e. something that will follow the rotation of your camera, but not react to the movement (be infinitely far), then the only way to do it is with a cubemap.
The other solution is a environment dome - a fully 3d object.
If you want a static background, then you should be able todo just a html background, i'm not sure why this would fail and what 'aliasing in models' you are talking about.
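For the cubemap route, a rough sketch (the path and the six image names are placeholders):
scene.background = new THREE.CubeTextureLoader()
    .setPath( 'textures/cube/' )                          // placeholder path
    .load( [ 'px.jpg', 'nx.jpg', 'py.jpg', 'ny.jpg', 'pz.jpg', 'nz.jpg' ] );
This follows the camera's rotation but not its position, so it behaves like an infinitely distant background.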