As far as I know, it is called projective texture mapping. Are there any library methods to project primitive 2D shapes (lines mostly) to a texture?
This threejs example looks close to what I need. I tried replacing the decal texture (decalMaterial) with
THREE.LineBasicMaterial
but I get a square instead of lines.
Drawing a line onto a 3D model's texture is possible. But do you want that, or just a decal on your model? There are other ways of applying a decal that might be better, but I'll assume you want to paint onto the texture. I've done it in Unity, but not WebGL, and I am not familiar with WebGL's limitations. It was a program that lets you paint onto 3D models like in Substance Painter, ZBrush, etc. I recommend doing this in a non-destructive manner at runtime: paint into a separate render texture, then combine the two textures when rendering your final model.
To do this you are going to need to render your model into texture space, so in your vertex shader output the UV of the model as your position.
I've been writing HLSL and Cg a lot recently, so my GLSL is super rusty. Treat this more like pseudocode.
//Vertex Shader
in vec4 position;
in vec2 uv;
out vec4 fworldPos;
out vec2 fuv;
uniform mat4 transform;
void main()
{
//We use the uv position of the model so we can draw into
//its texture space.
gl_Position = vec4(uv.x*2.0-1.0,uv.y*2.0-1.0,0.0,1.0);
//Give the fragment shader the uv
fuv = uv;
//We will need to do per pixel collision detection with
//the line or object so we need the world position.
fworldPos = transform*position;
}
You then need to do collision detection with your brush in the pixel shader and see if that world position is contained within your brush. Again, this is only if you want to paint onto the texture. In the project I worked on we used raycasts onto the model so the brush would follow along the surface, which made the above easy. This will give you a very rough-looking brush, and you'll need to add a lot of parameters to adjust how the brush looks: falloff strength, falloff radius, etc. Basically all the parameters you see in other 3D painting software.
You can do this with planes, boxes, spheres, lines, whatever; you just need to test whether the world position is contained within your brush. You will need different shaders for different types of brushes, though.
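As a concrete illustration, a line (capsule) brush fragment shader to pair with the vertex shader above could look roughly like this; the brush uniforms and the linear falloff are assumptions, not part of the original project:
//Fragment Shader (sketch)
precision highp float;
in vec4 fworldPos;
out vec4 fragColor;
//Assumed brush parameters: a world-space line segment, radius and color
uniform vec3 brushStart;
uniform vec3 brushEnd;
uniform float brushRadius;
uniform vec4 brushColor;
void main()
{
    //Distance from this texel's world position to the brush line segment
    vec3 ab = brushEnd - brushStart;
    float t = clamp(dot(fworldPos.xyz - brushStart, ab) / dot(ab, ab), 0.0, 1.0);
    float d = length(fworldPos.xyz - (brushStart + t * ab));
    if (d > brushRadius)
        discard;
    //Simple linear falloff towards the edge of the brush
    fragColor = vec4(brushColor.rgb, brushColor.a * (1.0 - d / brushRadius));
}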
Let's say I have a geometry whose vertices I am using to create Points or an InstancedMesh. But then I want to change this underlying geometry to something else, let's say a cone to a sphere or something that has the same number of vertices. I would like to animate between these without using MorphTargets, so I guess I need to use a custom vertex shader, which is fine; however, I'm a bit stuck as to how to pass the additional BufferGeometrys into the vertex shader.
I can't really think how I might do this with uniforms - has anyone got any ideas? In my understanding I can only use int/float/bool/vec/ivec/mat, but I need multiple vertex buffers - is it just an array of some kind?
I guess I'm trying to find a way of having multiple "full" geometries which I can interrogate within the vertex shader, but I can't figure out how to access/pass these additional buffers into WebGL from three.js.
The default position of the vertices is defined in a BufferAttribute called position. This is passed to the vertex shader as attribute vec3 position.
You could create a new BufferAttribute in your geometry called position2 and declare it in your vertex shader as attribute vec3 position2.
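For example (a minimal sketch; position2 and the morph uniform are names you would choose yourself), the vertex shader of a ShaderMaterial could blend between the two attributes like this:
//Vertex shader sketch: blend between two full sets of vertex positions.
//With THREE.ShaderMaterial, position, modelViewMatrix and projectionMatrix
//are declared for you; position2 and morph are added here.
attribute vec3 position2;  //second geometry's vertices (same vertex count)
uniform float morph;       //0.0 = original positions, 1.0 = position2
void main()
{
    vec3 p = mix(position, position2, morph);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}
On the JavaScript side you would attach the second buffer with something like geometry.setAttribute('position2', new THREE.BufferAttribute(secondPositions, 3)) and animate the morph uniform over time.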
I have been trying to figure out a way of adding thickness to lines that can receive shadows and look like solid objects using Three.js, but the best result I managed to get so far is just thicker lines that do not look like 3D geometry.
The application is an online 3D printing platform, so I am trying to visualise the sliced geometry that is comprised of lines, similar to how other slicing software, such as Cura, handles this.
Generating mesh geometry from these lines would most probably be problematic, as in some cases there are thousands of lines in a single model, so it would be too heavy.
Any suggestions on how to achieve the desired result in either three.js or another javascript library would be greatly appreciated!
So the idea is to render a primitive covering your thick line area and, in the fragment shader, decide whether the fragment is inside or outside the thick line: if inside, compute the 3D position and normal and render it; if not, discard it.
You pass the polyline geometry to OpenGL so it would produce just thin lines, and use shaders to do the rest.
Vertex shader
will just pass stuff through to the geometry shader
Geometry shader
will take in 2 vertexes (a line) and output 2 triangles (a quad) covering the line's BBOX (the line enlarged by half the line thickness). This is relatively easy: simply shift the line endpoints by a vector perpendicular to the line, with length equal to half the thickness. This must be done in a plane parallel to the camera screen plane (using basis vectors extracted from the direct camera matrix). Do not forget to pass both vertexes in world and camera coordinates.
Fragment shader
simply test, from the world coordinates, whether the point is inside your thick line:
So simply compute P' (the closest point to P on the line) and then the distance between P and P'. That is called the perpendicular distance between a point and a line. It's doable by exploiting the dot product IIRC:
t = dot(P-P0, P1-P0) / dot(P1-P0, P1-P0)
P' = P0 + t*(P1-P0)
d = |P-P'|
From that you just compute the 3D coordinate (the depth of the fragment) and the normal, and either render with some directional light or discard;...
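In GLSL the test itself is only a few lines; a rough sketch (the variable names are illustrative, and the full example linked below fills in the depth and lighting details):
//Fragment shader sketch: keep only fragments inside the thick line
//p = fragment world position, p0/p1 = line endpoints, r = half thickness
vec3 ab = p1 - p0;
float t = clamp(dot(p - p0, ab) / dot(ab, ab), 0.0, 1.0);
vec3 pp = p0 + t * ab;        //P' = closest point on the line
float d = length(p - pp);     //perpendicular distance
if (d > r) discard;
//otherwise compute the fragment depth and normal from p, pp and shade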
Take a look at full example of this technique for 2D cubic curves:
rendering thick 2D Cubics in GLSL
I have a vertex shader and fragment shader using the flat qualifier, and for complicated hacky reasons explained near the end of https://www.youtube.com/watch?v=l6PEfzQVpvM, I want to change the WebGL flat-shading provoking vertex, which is the vertex whose value WebGL will use when passing flat varyings (my shader is like this):
// ...
flat out vec3 pass_vertexColor;
// ...
and I know in OpenGL you can change the provoking vertex like this:
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION); // or GL_LAST_VERTEX_CONVENTION, I think
But I don't think there is an equivalent in WebGL yet. So my question is:
What is the default WebGL Provoking Vertex?
WebGL 1.0 does not support flat shading. See this answer for a workaround.
WebGL 2.0 has flat shading. It has the same default as regular OpenGL - the provoking vertex is the last vertex of the triangle. It appears this cannot be changed in WebGL 2.0, as glProvokingVertex() is not exposed through the WebGL 2.0 API.
Note that even in desktop OpenGL, the provoking vertex can either be the first or last vertex of the triangle (with last being the default). It cannot be the middle vertex.
I am trying to get back and export the mesh that is being displaced by a displacementMap.
The shader is transforming vertexes according to this line (from
three.js/src/renderers/shaders/ShaderChunk/displacementmap_vertex.glsl):
transformed += normalize( objectNormal ) * ( texture2D( displacementMap, uv ).x * displacementScale + displacementBias );
This displaces a vertex along its normal by the displacementMap value sampled at that vertex's uv coordinates.
I am trying to create this mesh/geometry so that I can then later export it.
I have created a "demo" of the problem here:
Github Page
I would like the displaced mesh, as seen in the viewport, upon pressing exportSTL. However I am only getting the undisplaced plane.
I understand why this happens, the displacement only happens in the shader and is not really displacing the geometry of the plane directly.
I have not found a method provided by three.js and so far have not found any way in getting the changes from the shader.
So I am trying to do it with a function in the "demo.js".
However, I am a WebGL/three.js newbie and have problems re-creating what the shader does.
I have found exporters handling morphTargets, but these are of no help.
After reading this question I tried PlaneBufferGeometry, as this is closer to the shader - but this produces the same results for me.
I think this question originally tried to produce something similar, but accepted an unrelated answer.
In the end I would like to draw on a HTML-canvas which then updates the texture in real time (I have this part working). The user can then export the mesh for 3d printing.
Is there a way three.js can give me the modified geometry of the shader?
Or can someone help me translate the shader line in to a "conventional" Three.js function?
Maybe this is totally the wrong approach to get a displaced mesh?
Update - Example is working
Thanks to the example from DeeFisher I can now calculate the displacement on the CPU, as originally suggested by imerso.
If you click on the Github Page now, you will get a working example.
At the moment I do not fully understand why I have to mirror the canvas to get the correct displacement in the end, but this is at worst a minor nuisance.
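For reference, the CPU-side displacement boils down to applying the same formula as the shader line above to every vertex. A minimal sketch (the canvas lookup and the names here are assumptions, not the exact demo code):
// Sketch: displace a plane geometry's vertices on the CPU, mirroring
// the displacementmap_vertex.glsl line above. Assumes the displacement
// texture is an HTML canvas and the geometry has position/normal/uv attributes.
function displaceGeometry(geometry, canvas, displacementScale, displacementBias) {
  const img = canvas.getContext('2d').getImageData(0, 0, canvas.width, canvas.height).data;
  const pos = geometry.attributes.position;
  const nor = geometry.attributes.normal;
  const uv = geometry.attributes.uv;
  for (let i = 0; i < pos.count; i++) {
    // Sample the red channel at this vertex's uv (nearest texel, v flipped)
    const x = Math.min(canvas.width - 1, Math.floor(uv.getX(i) * canvas.width));
    const y = Math.min(canvas.height - 1, Math.floor((1 - uv.getY(i)) * canvas.height));
    const value = img[(y * canvas.width + x) * 4] / 255;
    const d = value * displacementScale + displacementBias;
    // transformed += normalize(objectNormal) * d
    pos.setXYZ(i,
      pos.getX(i) + nor.getX(i) * d,
      pos.getY(i) + nor.getY(i) * d,
      pos.getZ(i) + nor.getZ(i) * d);
  }
  pos.needsUpdate = true;
}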
To do that while still using a shader for the displacement, you will need to switch to WebGL2 and use Transform-Feedback (Google search: WebGL2 Transform-Feedback).
An alternative would be to read the texture back to CPU, and scan it while displacing the vertices using CPU only (Google search: WebGL readPixels).
Both alternatives will require some effort, so no code sample at this time. =)
BABYLON.js can be used in conjunction with THREE.js and it allows you to displace the actual mesh vertices when applying displacement maps:
var sphere = BABYLON.Mesh.CreateSphere("Sphere", 64, 10, scene, true);
sphere.applyDisplacementMap(url, minHeight, maxHeight, onSuccess, uvOffset, uvScale)
See an example of the function in use here.
You can then use a for loop to transfer the BABYLON mesh data into a THREE mesh object.
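A rough sketch of that transfer, assuming reasonably current Babylon.js and three.js APIs (not tested against the exact versions in question):
// Sketch: copy the displaced BABYLON mesh into a THREE.BufferGeometry
const positions = sphere.getVerticesData(BABYLON.VertexBuffer.PositionKind);
const indices = sphere.getIndices();
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geometry.setIndex(Array.from(indices));
geometry.computeVertexNormals();
const threeMesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
// Note: Babylon is left-handed and three.js right-handed, so you may need
// to flip one axis and the triangle winding.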
I know that we can load JSON models in WebGL, but I don't know how to animate them if we have a rigged model loaded. Is there any way of doing this without three.js?
You can animate a rigged model using THREE.js (however, you seem not to want to use the built-in functionality).
What THREE.js is doing in the background is passing all the matrix transforms (an array of bone matrices) to the vertex shader, and per vertex it passes the bone indices (up to 4) and bone weights. In the vertex shader, it blends between those matrices based on the vertex weights and transforms the vertex. So in theory you can pass values to the vertex shader to animate things yourself, or just use THREE.js's animation routines.
It can use 2 methods to store all this data. One method uses an "image texture" which stores all those matrices and does some fancy footwork to turn the image back into matrices in the vertex shader. The other method is just passing a uniform matrix array (for newer graphics cards this is the preferred method).
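A stripped-down sketch of that per-vertex blending (illustrative names, not three.js's actual skinning shader chunk):
//Vertex shader sketch: linear blend skinning with up to 4 bones per vertex
attribute vec3 position;
attribute vec4 skinIndex;      //which bones affect this vertex
attribute vec4 skinWeight;     //how much each bone contributes (sums to 1)
uniform mat4 boneMatrices[64]; //one transform per bone for the current frame
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
void main()
{
    mat4 skinMatrix =
        skinWeight.x * boneMatrices[int(skinIndex.x)] +
        skinWeight.y * boneMatrices[int(skinIndex.y)] +
        skinWeight.z * boneMatrices[int(skinIndex.z)] +
        skinWeight.w * boneMatrices[int(skinIndex.w)];
    gl_Position = projectionMatrix * modelViewMatrix * skinMatrix * vec4(position, 1.0);
}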