I'm looking to project a texture onto the surface of a mesh in ThreeJS.
https://www.lanyardmarket.com/en/printed-tshirt
This link achieves the result I'm looking for; however, I'm not sure how they did it.
I'll update this post as I research, but if anyone knows how to project a three.js texture onto a mesh, I'd love to know.
Thanks
You may find a working example here: https://jsfiddle.net/mmalex/pcjbysn1/
BufferGeometry stores texture coordinates in its 'uv' attribute; you can add it with BufferGeometry.addAttribute and access it through geom.attributes.uv.array.
let uvcoords = [];
let vertexCount = geom.attributes.position.array.length / 3;
// allocate the array of UV coordinates (2 floats per vertex)
uvcoords.length = 2 * vertexCount;
if (geom.attributes.uv === undefined) {
    geom.addAttribute('uv', new THREE.Float32BufferAttribute(uvcoords, 2));
}
Now all you need to do is "project" the mesh vertices onto some 3D plane. These projection coordinates will serve as your UV coordinates.
In the general case, you would call Plane.projectPoint for each vertex. This approach is straightforward, and it can be optimized by pre-rotating the mesh so that the vertex x and y components become u and v directly. You will find this in my jsfiddle.
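For illustration, here is a minimal sketch of the per-vertex approach, assuming the uv attribute added above and a projection plane facing +z; the uAxis/vAxis names (the plane's in-plane axes) are mine, not from the fiddle, and in practice you would scale the dot products into the 0..1 range:
const plane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 0);
const uAxis = new THREE.Vector3(1, 0, 0);
const vAxis = new THREE.Vector3(0, 1, 0);
const pos = geom.attributes.position;
const uv = geom.attributes.uv;
const vertex = new THREE.Vector3();
const projected = new THREE.Vector3();
for (let i = 0; i < pos.count; i++) {
    vertex.fromBufferAttribute(pos, i);
    plane.projectPoint(vertex, projected); // drop the vertex onto the plane
    uv.setXY(i, projected.dot(uAxis), projected.dot(vAxis));
}
uv.needsUpdate = true;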
I want to make portals with threejs by drawing an ellipse and then texture mapping a WebGlRenderTarget to its face. I have that function sort of working, but it tries to stretch the large rectangular buffer from the render target to the ellipse. What I want is to project the texture in its original dimensions onto the ellipse and just cut out anything that doesn't hit the ellipse like so:
Before Projection:
After Projection:
How can this be done with threejs?
I've looked into texture coordinates but don't understand how to use them, and I even saw a projection-light PR in three.js that might work?
Edit: I also watched a Sebastian Lague video on portals and saw he does this with “screen space coordinates”. Any advice on using those?
Thanks for your help!
I made a CodePen available here:
https://codepen.io/cdeep/pen/JjyjOqY
UV mapping lets us specify which parts of the texture correspond to which vertices of the geometry. More details here: https://www.spiria.com/en/blog/desktop-software/understanding-uv-mapping-and-textures/
You could loop through the vertices and set the corresponding UV value.
const uvPositions = [];
const vertices = ellipseGeometry.attributes.position.array;
for (let i = 0; i < numPoints; i++) {
    const [x, y] = [vertices[3 * i], vertices[3 * i + 1]];
    uvPositions.push(0.5 + x * imageHeight / ((2 * yRadius) * imageWidth));
    uvPositions.push(0.5 + y / (2 * yRadius));
}
ellipseGeometry.setAttribute("uv", new THREE.Float32BufferAttribute(uvPositions, 2));
UV coordinates increase from (0, 0) to (1, 1) from bottom left to top right.
The above code works because the ellipse is on the x-y plane. Otherwise, you'll need to get the x, y values in the plane of the ellipse.
More info on texture mapping in three.js here:
https://discoverthreejs.com/book/first-steps/textures-intro/
Edit: Do note that the demo doesn't really look like a portal. For that, you'll need to move the texture based on the camera view, which isn't that simple.
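For what it's worth, a hedged sketch of the "screen space coordinates" idea from the question is to sample the render target with gl_FragCoord in a custom shader, so the portal face shows the texture at its on-screen position instead of stretching it; renderTarget and the resolution handling here are my assumptions, not part of the codepen:
const portalMaterial = new THREE.ShaderMaterial({
    uniforms: {
        map: { value: renderTarget.texture }, // assumed WebGLRenderTarget
        resolution: { value: new THREE.Vector2(window.innerWidth, window.innerHeight) }
    },
    fragmentShader: `
        uniform sampler2D map;
        uniform vec2 resolution;
        void main() {
            // sample the scene texture at this fragment's screen position
            gl_FragColor = texture2D(map, gl_FragCoord.xy / resolution);
        }
    `
});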
Adding new vertices to a three.js mesh is done with mesh.geometry.vertices.push(new THREE.Vector3(x, y, z)), but how do I remove them?
"geometry" is an array, so I thought, I could remove vertices with:
mesh.geometry.vertices.splice(vertexIndex, 1)
mesh.geometry.verticesNeedUpdate = true;
But when I do that, the whole thing breaks, with three.js internal error messages saying: "Uncaught TypeError: Cannot read property 'x' of undefined" inside three.min.js.
I searched their wiki and their GitHub issues and can't find an answer. The mesh is a simple BoxGeometry, so not even a custom one.
In three.js each face is made of 3 vertices. Here is an example to make it clearer; this is how you create a geometry in r71:
geometry = new THREE.Geometry();
geometry.vertices.push( // a few vertices with random coordinates
    new THREE.Vector3(12, 15, 5), // index: 0 -- the numbers are (x, y, z) coordinates
    new THREE.Vector3(10, 15, 5), // index: 1
    new THREE.Vector3(12, 10, 2), // index: 2
    new THREE.Vector3(10, 10, 2)  // index: 3
);
geometry.faces.push(
    new THREE.Face3(0, 1, 2), // these numbers are indices of vertices in the previous array
    new THREE.Face3(0, 3, 2)
);
geometry.computeFaceNormals(); // we won't care about this here
(I did not care about the values, so I do not know which shape this gives.)
What you can see is that two arrays are built: vertices and faces. At each frame, each face is drawn using the positions of its vertices.
You asked what is wrong with deleting a vertex from the geometry.vertices array: let's imagine the second vertex above is deleted. The array now looks like this:
geometry.vertices = [
    new THREE.Vector3(12, 15, 5), // index: 0
    new THREE.Vector3(12, 10, 2), // new index: 1
    new THREE.Vector3(10, 10, 2)  // new index: 2
];
There is no longer a vertex at index 3, so when the GPU draws the next frame, any face that points to it (here the second face) will try to access its coordinates (x first, then y and z). That is why the console reports that it cannot read x of undefined.
That was the long explanation of the error. You can also see that the vertex deletion shifted the array, so the faces no longer have the correct shape and their normals no longer correspond. Worst of all, the underlying buffer would have to change size, which is simply not allowed, as stated for example here:
Dynamically Adding Vertices to a Line in Three.js
Adding geometry to a three.js mesh after render
The solution is to use tricks, as discussed in those links: modify your vertex coordinates, hide faces... it depends on what you want to do.
If your scene does not have many vertices, you can also remove the previous mesh and create a new one with a new geometry, without the unwanted vertex and with a corrected face array.
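As a minimal sketch of that last approach, assuming the r71-style THREE.Geometry API from above (the helper name removeVertex is mine):
function removeVertex(geometry, removeIndex) {
    var newGeometry = new THREE.Geometry();
    geometry.vertices.forEach(function (v, i) {
        if (i !== removeIndex) newGeometry.vertices.push(v.clone());
    });
    geometry.faces.forEach(function (f) {
        // drop faces that referenced the removed vertex
        if (f.a === removeIndex || f.b === removeIndex || f.c === removeIndex) return;
        // shift indices above the removed vertex down by one
        var a = f.a > removeIndex ? f.a - 1 : f.a;
        var b = f.b > removeIndex ? f.b - 1 : f.b;
        var c = f.c > removeIndex ? f.c - 1 : f.c;
        newGeometry.faces.push(new THREE.Face3(a, b, c));
    });
    newGeometry.computeFaceNormals();
    return newGeometry;
}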
I am relatively new to three.js and am trying to position and manipulate a plane object to have the effect of laying over the surface of a sphere object (or any for that matter), so that the plane takes the form of the object surface. The intention is to be able to move the plane on the surface later on.
I position the plane in front of the sphere and index through the plane's vertices casting a ray towards the sphere to detect the intersection with the sphere. I then try to change the z position of said vertices, but it does not achieve the desired result. Can anyone give me some guidance on how to get this working, or indeed suggest another method?
This is how I attempt to change the vertices (with an offset of 1 to be visible 'on' the sphere surface):
planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
Making sure to set the following before rendering:
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;
I have a fiddle that shows where I am: here I cast my rays in z and I do not get intersections (collisions) with the sphere, so I cannot change the plane in the manner I wish.
http://jsfiddle.net/stokewoggle/vuezL/
You can rotate the camera around the scene with the left and right arrows (in Chrome, anyway) to see the shape of the plane. I have made the sphere see-through, as I find it useful to see the plane better.
EDIT: Updated fiddle and corrected description mistake.
Sorry for the delay, but it took me a couple of days to figure this one out. The reason why the collisions were not working was because (like we had suspected) the planeMesh vertices are in local space, which is essentially the same as starting in the center of the sphere and not what you're expecting. At first, I thought a quick-fix would be to apply the worldMatrix like stemkoski did on his github three.js collision example I linked to, but that didn't end up working either because the plane itself is defined in x and y coordinates, up and down, left and right - but no z information (depth) is made locally when you create a flat 2D planeMesh.
What ended up working is manually setting the z component of each vertex of the plane. You had originally wanted the plane to be at z = 201, so I just moved that code inside the loop that goes through each vertex and manually set each vertex to z = 201. Now all the ray start positions were correct (globally), and a ray direction of (0, 0, -1) resulted in correct collisions.
var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
localVertex.z = 201;
One more thing: in order to make the plane-wrap absolutely perfect in shape, instead of using (0, 0, -1) as each ray direction, I manually calculated each ray direction by subtracting each vertex from the sphere's center position and normalizing the resulting vector. Now the collisionResult intersection point will be even better.
var directionVector = new THREE.Vector3();
directionVector.subVectors(sphereMesh.position, localVertex);
directionVector.normalize();
var ray = new THREE.Raycaster(localVertex, directionVector);
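Putting the two fixes together, a hedged sketch of the whole per-vertex loop might look like this; it assumes planeMesh has an identity transform (so local and world coordinates coincide) and uses a 1-unit offset to keep the patch visibly in front of the surface:
for (var vertexIndex = 0; vertexIndex < planeMesh.geometry.vertices.length; vertexIndex++) {
    var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
    localVertex.z = 201; // start each ray out in front of the sphere
    // aim the ray at the sphere's center
    var directionVector = new THREE.Vector3();
    directionVector.subVectors(sphereMesh.position, localVertex);
    directionVector.normalize();
    var ray = new THREE.Raycaster(localVertex, directionVector);
    var collisionResults = ray.intersectObject(sphereMesh);
    if (collisionResults.length > 0) {
        // pull the vertex back 1 unit from the hit point, along the ray
        var hit = collisionResults[0].point.clone();
        hit.sub(directionVector.clone().multiplyScalar(1));
        planeMesh.geometry.vertices[vertexIndex].copy(hit);
    }
}
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;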
Here is a working example:
http://jsfiddle.net/FLyaY/1/
As you can see, the planeMesh fits snugly on the sphere, kind of like a patch or a band-aid. :)
Hope this helps. Thanks for posting the question on three.js's github page - I wouldn't have seen it here. At first I thought it was a bug in THREE.Raycaster but in the end it was just user (mine) error. I learned a lot about collision code from working on this problem and I will be using it later down the line in my own 3D game projects. You can check out one of my games at: https://github.com/erichlof/SpacePong3D
Best of luck to you!
-Erich
Your ray start position is not good, probably because the vertex coordinates are local to the plane. You start the raycast from inside the sphere, so it never hits anything.
I changed the ray start position like this as a test and get 726 collisions:
var rayStart = new THREE.Vector3(0, 0, 500);
var ray = new THREE.Raycaster(rayStart, new THREE.Vector3(0, 0, -1));
Forked jsfiddle: http://jsfiddle.net/H5YSL/
I think you need to transform the vertex coordinates to world coordinates to get the position correctly. That should be easy to figure out from docs and examples.
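For example, a minimal sketch of that transform, assuming the standard Object3D matrix API:
// bring a local-space vertex into world space before raycasting
planeMesh.updateMatrixWorld();
var worldVertex = planeMesh.geometry.vertices[vertexIndex].clone()
    .applyMatrix4(planeMesh.matrixWorld);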
I am using the Three.JS library to display a point cloud in a web browser. The point cloud is generated once at start-up and no further points are added or removed, but it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example, I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point cloud without using the image? With sphere geometry, for example?
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient approach for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function (context) {
    context.beginPath();
    context.arc(0, 0, 1, 0, PI2, true);
    context.closePath();
    context.fill();
};

var particle = new THREE.Particle(new THREE.ParticleCanvasMaterial({
    color: Math.random() * 0x808008 + 0x808080,
    program: program
}));
Feel free to adapt the code for the WebGL renderer.
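As a hedged sketch of that adaptation, you could draw the same circle into a small canvas and use it as a point-sprite texture (the helper name makeCircleTexture is mine):
function makeCircleTexture(size) {
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = size;
    var context = canvas.getContext('2d');
    context.fillStyle = '#ffffff';
    context.beginPath();
    context.arc(size / 2, size / 2, size / 2, 0, Math.PI * 2, true);
    context.closePath();
    context.fill();
    var texture = new THREE.Texture(canvas);
    texture.needsUpdate = true; // upload the canvas contents to the GPU
    return texture;
}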
Another clever solution I've seen in the examples is using an encoded WebM video to store the data and passing it to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3, I saw that the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Adding this to the fragment shader fixed the problem for me.
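For context, here is a minimal sketch of where those three lines live, assuming a ShaderMaterial for the points system; pointTexture and the vertex shader are my assumptions, and under WebGL2 the uniform would need a name other than texture:
var material = new THREE.ShaderMaterial({
    uniforms: { texture: { value: pointTexture } },
    vertexShader: [
        'void main() {',
        '    gl_PointSize = 8.0;',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform sampler2D texture;',
        'void main() {',
        '    vec4 outColor = texture2D( texture, gl_PointCoord );',
        '    if ( outColor.a < 0.5 ) discard;',
        '    gl_FragColor = outColor;',
        '}'
    ].join('\n')
});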
It wasn't z-fighting, because some corners would randomly overlap distant particles. material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency overlap problem of the underlying square structure by turning off depth testing on the material:
depthTest: false
The problem then is that if you add other objects to the scene, the depth testing will fail and the PointCloud will be rendered in front of those objects, ignoring the actual order. To get around that, you can additionally turn off depth writing:
depthWrite: false
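A minimal sketch of both flags together, assuming a modern THREE.Points material and a circular sprite texture (spriteTexture is my assumption):
var material = new THREE.PointsMaterial({
    map: spriteTexture,  // assumed: a circular point-sprite texture
    size: 2,
    transparent: true,
    depthTest: false,    // ignore the depth buffer when drawing points
    depthWrite: false    // don't write points into the depth buffer
});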
In my project the shapes I created were spheres, and I used an image as the texture for the material...
How can I make a custom shape (not a sphere, rectangle, etc.)? For example, how can I create a half-sphere?
My code for now:
// create a texture
texture = THREE.ImageUtils.loadTexture('red.png');
// create a sphere shape
geometry = new THREE.SphereGeometry(50, 16, 16);
// give the shape a red color
material = new THREE.MeshLambertMaterial({map: texture});
// create an object
mesh = new THREE.Mesh( geometry, material);
There are multiple ways to create geometry in three.js, from exporting models via a 3D editor (like Blender, for which a nice three.js exporter already exists) to creating geometry from scratch.
One way would be to create an instance of THREE.Geometry and add vertices, then work out how those connect to add face indices, but this is not an easy way to do it.
I would suggest starting with the existing geometries (found in the extras/geometries package), like THREE.CubeGeometry, THREE.CylinderGeometry, THREE.IcosahedronGeometry, THREE.OctahedronGeometry, etc.
Additionally, there are some really nice classes that allow you to generate extrusions (THREE.ExtrudeGeometry) and lathes (THREE.LatheGeometry). For extrusions, see this example.
You mentioned creating half a sphere. That's an ideal candidate for using LatheGeometry.
All you need to do is create a half-circle path (as an array of Vector3 instances) and pass that to the lathe so it revolves the half-circle into 3D: a half-sphere.
Here's an example:
var pts = []; // points array - the path profile points will be stored here
var detail = .1; // half-circle detail - how many angle increments will be used to generate points
var radius = 200; // radius for half_sphere
for (var angle = 0.0; angle < Math.PI; angle += detail) // loop from 0.0 radians to PI (0 - 180 degrees)
    pts.push(new THREE.Vector3(Math.cos(angle) * radius, 0, Math.sin(angle) * radius)); // angle/radius to x,z
geometry = new THREE.LatheGeometry(pts, 12); // create the lathe with 12 radial repetitions of the profile
Plug that geometry into your mesh and Bob’s your uncle!
Optionally, you can centre the mesh/pivot using GeometryUtils:
THREE.GeometryUtils.center(geometry);
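For completeness, a hedged usage sketch wiring the lathe geometry into the asker's original texture/material setup (it assumes an existing scene):
THREE.GeometryUtils.center(geometry); // optional: center the pivot
var texture = THREE.ImageUtils.loadTexture('red.png');
var material = new THREE.MeshLambertMaterial({ map: texture });
var halfSphere = new THREE.Mesh(geometry, material);
scene.add(halfSphere);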