Three JS Cylinder Texture Distortion

Applying a texture to a cylinder geometry in THREE JS causes some weird distortion of the texture as can be seen in this image:
The shape is created like so:
var cylinderGeo = new THREE.CylinderGeometry(0.1, 1, 1, 4, 1, false, Math.PI / 4);
cylinderGeo.computeFlatVertexNormals();
var mesh = new THREE.Mesh(cylinderGeo);
mesh.position.x = 10;
mesh.scale.set(10, 5, 10);
mesh.material = new THREE.MeshLambertMaterial();
// LOAD TEXTURE:
var textureLoader = new THREE.TextureLoader();
textureLoader.load("/textures/" + src + ".png", function (texture) {
    texture.wrapS = THREE.RepeatWrapping;
    texture.wrapT = THREE.RepeatWrapping;
    texture.repeat.set(2, 2);
    texture.needsUpdate = true;
    mesh.material.map = texture;
    mesh.material.needsUpdate = true;
});
The texture seems to be applied per polygon and not per face? How can I make it wrap around the cylinder without any such artifacts?
EDIT: The texture is 256x256

Each side of your pyramid is composed of two triangles. You can see this quite clearly in the example on the CylinderGeometry documentation page.
Each triangle has UVs which are created on the assumption that both triangles are at the same scale, as they are in the example. By making one end of your cylinder smaller, you're changing the scale of the triangles, but your UVs remain the same.
You can either edit the UVs to make up for the difference, or (and I recommend) create your own geometry with proper UVs defined.
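For illustration, here is one way the UVs could be rewritten (a minimal sketch, assuming the legacy THREE.Geometry API used above, and ignoring the wrap-around seam and the end caps for brevity). Each face-vertex UV is recomputed from the vertex's angle around the Y axis and its height, so the texture wraps evenly regardless of the taper:
// Hypothetical helper: recompute cylindrical UVs for a legacy geometry.
function cylindricalUVs(geometry, height) {
    geometry.faceVertexUvs[0] = geometry.faces.map(function (face) {
        return [face.a, face.b, face.c].map(function (idx) {
            var v = geometry.vertices[idx];
            // u: 0..1 around the axis, v: 0..1 along the axis
            var u = Math.atan2(v.z, v.x) / (2 * Math.PI) + 0.5;
            return new THREE.Vector2(u, v.y / height + 0.5);
        });
    });
    geometry.uvsNeedUpdate = true;
}
cylindricalUVs(cylinderGeo, 1); // the cylinder is built with height 1 before scaling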

Related

Get 3D Position of shadow pixels in ThreeJS

I have the following project below, created using ThreeJS. You will notice the gold object casts a shadow behind it onto a sphere; I'm only rendering the sphere's back side so we can see the objects inside. I'm using a point light at the very center of the eye model to cast the shadow evenly in all directions, which is why the shadow is curved.
I need to know how to get the 3D coordinates (x, y, z) of each pixel of the shadow that was created. For reference, here is the code that creates the shadow, with a lot removed for simplicity:
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.BasicShadowMap; // or THREE.PCFSoftShadowMap
const light = new THREE.PointLight( 0xffffff, 20, 0 );
light.position.set( 0, 0, 0 );
light.castShadow = true;
light.shadow.mapSize.width = 512;
light.shadow.camera.near = 0.5;
light.shadow.camera.far = 500;
scene.add( light );
const sphereGeometry = new THREE.SphereGeometry( 25, 32, 32 );
const sphereMaterial = new THREE.MeshStandardMaterial( { color: 0xffffff } );
sphereMaterial.side=THREE.BackSide;
const sphere = new THREE.Mesh( sphereGeometry, sphereMaterial );
sphere.castShadow = false;
sphere.receiveShadow = true;
scene.add( sphere );
I have done some research into this, and I think the shadow information may be stored in the matrix property of the model, but no documentation makes this clear, so I'm not sure where to look. Any help is appreciated!
--- Extra not important info ---
Also, in case you are curious: the reason I need the shadow coordinates is that I will use them to raycast back into the eye and create a different kind of shadow on an azimuthal equidistant projection on the right (it's complicated...), but just know that if I have the 3D coordinates of the shadow pixels I can do this :). I'm already doing it for the muscles of the eye, for example.
You can't extract the shadow into a new geometry, because this is all calculated in the GPU shaders at render time, so JavaScript doesn't really have access to the shadow-map positions. However, there is a solution.
Assuming your point light is at (0, 0, 0), and it's at the center of the sphere, you could iterate through the vertices of the gold object and project these positions onto the sphere:
// Sphere radius
const radius = 25;
const vec3 = new THREE.Vector3();
// Get the vertex position array
const vertices = goldObject.geometry.getAttribute("position").array;
// Loop that iterates through all vertex positions
for (let i3 = 0; i3 < vertices.length; i3 += 3) {
    // Set this vertex into our vec3
    vec3.set(
        vertices[i3 + 0], // x
        vertices[i3 + 1], // y
        vertices[i3 + 2]  // z
    );
    // Set vector magnitude to 1
    vec3.normalize();
    // Set vector magnitude to the radius of the sphere
    vec3.multiplyScalar(radius);
    // Now you have the spherical projection of this vertex!
    console.log(vec3);
}
Since the light source is the exact center of the sphere, you could take the position of each vertex of the gold object, normalize it, then multiply it by the radius of the sphere. Now that you have the vec3 on each iteration, you could add it to your own array to build your own THREE.BufferGeometry that's pushed against the sphere.
Of course, if you've translated or rotated the gold object, then that will affect the vertex positions, so you'd have to undo those translations, rotations, etc. when iterating through all the vertices.
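A minimal standalone sketch tying both points together (assuming, as in the question, the light sits at the origin in the sphere's center): transform each vertex by the mesh's world matrix instead of undoing the transforms by hand, project it onto the sphere, and collect the results into a new THREE.BufferGeometry:
const radius = 25; // sphere radius, as above
const vec3 = new THREE.Vector3();
goldObject.updateMatrixWorld(true); // make sure the world matrix is current
const positions = goldObject.geometry.getAttribute("position");
const projected = new Float32Array(positions.count * 3);
for (let i = 0; i < positions.count; i++) {
    vec3.fromBufferAttribute(positions, i)    // local-space vertex
        .applyMatrix4(goldObject.matrixWorld) // -> world space
        .normalize()                          // direction from the light at (0, 0, 0)
        .multiplyScalar(radius);              // push out onto the sphere
    projected.set([vec3.x, vec3.y, vec3.z], i * 3);
}
const shadowGeometry = new THREE.BufferGeometry();
shadowGeometry.setAttribute("position", new THREE.BufferAttribute(projected, 3));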

Dynamically add and rotate a geometry in three.js

Refer to https://jsfiddle.net/pmankar/svt0nhuv/
The main large red icosahedron geometry keeps rotating about the y axis. I added a small red sphere geometry and merged it into the main large red icosahedron geometry. Up to this point it works fine. For this I used THREE.GeometryUtils.merge(point_sphere_iso_geom, sphere);
However, when I try to add spheres dynamically with a mouse click, they are added (yellow spheres), but they do not rotate with the main large red icosahedron geometry.
Can anyone explain why it works in the initial case but not when spheres are added dynamically, and how to achieve this dynamically as well?
I hope I understood you correctly. On every mouse click you have to create a new geometry based on the previous one (the mesh's geometry and matrix), merge it with the geometry of a new sphere, and apply it to a new mesh; then remove the old mesh and add the new one.
Some changes to the variables:
var geometry, material, point_sphere_iso_geom, mesh;
In the creation of the initial merged mesh:
point_sphere_iso_geom = new THREE.IcosahedronGeometry(100, 4);
cygeo = new THREE.SphereGeometry(5, 10, 10);
cygeo.translate(0,0,120);
point_sphere_iso_geom.merge( cygeo );
mesh = new THREE.Mesh(point_sphere_iso_geom, material);
And in the addYellowPoint function:
function addYellowPoint(valX, valY) { // valX/valY unused here; the position is randomized below
    var sgeometry = new THREE.SphereGeometry(2.5, 10, 10);
    var range = 150;
    var x = Math.random() * range - range / 2;
    var y = Math.random() * range - range / 2;
    var z = Math.random() * range - range / 2;
    sgeometry.translate(x, y, z);
    point_sphere_iso_geom = mesh.geometry.clone();
    point_sphere_iso_geom.applyMatrix(mesh.matrix); // bake the current rotation into the clone
    point_sphere_iso_geom.merge(sgeometry);
    scene.remove(mesh);
    mesh.geometry.dispose();
    mesh.material.dispose();
    mesh = new THREE.Mesh(point_sphere_iso_geom, material);
    scene.add(mesh);
}
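For what it's worth, an alternative that avoids re-merging on every click: add each new sphere as a child of the rotating mesh, so it inherits the parent's transform automatically. A minimal sketch (the yellow material here is an assumption, not taken from the fiddle):
function addYellowPointAsChild(x, y, z) {
    var sphere = new THREE.Mesh(
        new THREE.SphereGeometry(2.5, 10, 10),
        new THREE.MeshBasicMaterial({ color: 0xffff00 })
    );
    // The position is in the parent's local space, so the sphere
    // rotates together with the icosahedron.
    sphere.position.set(x, y, z);
    mesh.add(sphere);
}
The trade-off is one draw call per sphere, whereas merging keeps everything in a single mesh.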

Threejs PlaneGeometry doesn't receive shadows or reflections

I'm pretty new to 3D and to threejs, and I can't figure out how to get a PlaneGeometry to show individually illuminated polygons, i.e. receive shadows or show reflections. What I basically do is take a PlaneGeometry and apply some noise to every z value of its vertices. Then I have a simple directional light in my scene which is supposed to make the emerging noise pattern on the plane visible. I tried different things like plane.castShadow = true or renderer.shadowMapEnabled = true without success. Am I just missing a simple option, or is this way more complicated than I think?
Here are the relevant pieces of my code:
renderer.setSize(width, height);
renderer.setClearColor(0x111111, 1);
...
var directionalLight = new THREE.DirectionalLight( 0xffffff, 0.9);
directionalLight.position.set(10, 2, 20);
directionalLight.castShadow = true;
directionalLight.shadowCameraVisible = true;
scene.add( directionalLight );
var geometry = new THREE.PlaneGeometry(20, 20, segments, segments);
var index = 0;
for (var i = 0; i < segments + 1; i++) {
    for (var j = 0; j < segments + 1; j++) {
        zOffset = simplex.noise2D(i * xNoiseScale, j * yNoiseScale) * 5;
        geometry.vertices[index].z = zOffset;
        index++;
    }
}
var material = new THREE.MeshLambertMaterial({
    side: THREE.DoubleSide,
    color: 0xf50066
});
var plane = new THREE.Mesh(geometry, material);
plane.rotation.x = -Math.PI / 2.35;
plane.castShadow = true;
plane.receiveShadow = true;
scene.add(plane);
This is the output I get. Obviously the plane is aware of the light, because the bottom side is darker than the upper side, but there is no sign of individual polygons receiving individual lighting, and no 3D structure is visible. Interestingly, when I put in a different geometry like a BoxGeometry, individual polygons are illuminated individually (see the 2nd image). Any ideas?
OK, I figured it out thanks to this post. The trick is to use THREE.FlatShading on the material. It is important to note that after every update of the vertices, two things need to be done before rendering: geometry.computeFaceNormals() must be called, because when you alter the vertices the old normals are no longer correct, and geometry.normalsNeedUpdate must be set to true so the renderer picks up the newly oriented faces.
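A minimal sketch of that fix, assuming the legacy API used in the question (shading was a material option in those releases):
var material = new THREE.MeshLambertMaterial({
    side: THREE.DoubleSide,
    color: 0xf50066,
    shading: THREE.FlatShading // one normal per face, so each polygon is lit individually
});
// After every update of the vertices, before rendering:
geometry.computeFaceNormals();     // recompute normals for the displaced vertices
geometry.normalsNeedUpdate = true; // tell the renderer to pick them up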

Three.js point light not working with large meshes

I have the following problem. When I use Three.js point light like this:
var color = 0xffffff;
var intensity = 0.5;
var distance = 200;
position_x = 0;
position_y = 0;
position_z = 0;
light = new THREE.PointLight(color, intensity, distance);
light.position.set(position_x, position_y, position_z);
scene.add(light);
It works as expected when there is a "small" object (mesh) positioned close to the light on the scene. However, when there is a large object (let us say a floor):
var floorTexture = THREE.ImageUtils.loadTexture( 'floor.jpg' );
floorTexture.wrapS = floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set( 1, 1);
var floorMaterial = new THREE.MeshBasicMaterial( { map: floorTexture, side: THREE.DoubleSide } );
var floorGeometry = new THREE.PlaneGeometry(1000, 1000, 10, 10);
var floor = new THREE.Mesh(floorGeometry, floorMaterial);
floor.position.y = -0.5;
floor.rotation.x = Math.PI / 2;
scene.add(floor);
Then the light will not be shown on it. At first I thought this was because the floor's center is positioned further away from the point light, so the light cannot reach it with the distance set to 200 (even though part of the floor is closer than the mentioned distance). Therefore I tried to increase this distance; no luck.
There is a workaround: create the floor out of small parts. Then the point light again works as expected, but there is a problem with this approach, namely that it drastically decreases FPS due to the large number of "floor objects" to be rendered.
My guess is that I am missing something. I know that there are other types of light which cover the whole scene, but I am trying to create a lamp, so I think I need to use a point light. But I might be wrong. Any help or hint on how to make this work would be appreciated.
MeshBasicMaterial does not support lights. Use MeshPhongMaterial.
MeshLambertMaterial also supports lights, but it is not advisable in your case for reasons explained here: Three.js: What Is The Exact Difference Between Lambert and Phong?.
three.js r.66
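For illustration, the swap for the floor material might look like this (a sketch reusing the floorTexture from the question; MeshPhongMaterial is lit per pixel, so the point light shows up even on the sparsely segmented plane):
var floorMaterial = new THREE.MeshPhongMaterial({
    map: floorTexture,
    side: THREE.DoubleSide
});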

Threejs, making glass shatter effect

I have an idea to create, but since my knowledge about Three.js and 3D programming in general is limited, I am stuck...
Idea: the user is doing some things; at some point, the whole front screen shatters and reveals something different behind.
In the beginning, my idea was: take a screenshot of what is happening right now, put that image in front of everything (I'm still having difficulties making that plane take up 100% of the screen; the idea is that when it appears, the user cannot tell the difference between the old 3D renderings and the new 2D picture), and then shatter it. So, by looking around the Web at different examples, I made some... thing: I created a screenshot, made a plane, and applied the screenshot as a texture to it. To create the shattering effect, I used TessellateModifier to divide the plane and ExplodeModifier to create each face as a separate face.
Here is the code I have made so far:
function drawSS()
{
    var img = new Image;
    img.onload = function() { // When the screenshot is ready
        var canvas = document.createElement('canvas');
        canvas.width = window.innerWidth;
        canvas.height = window.innerHeight;
        var context = canvas.getContext('2d');
        // CANVAS DRAWINGS
        // Draws screenshot
        // END
        var texture = new THREE.Texture(canvas);
        texture.needsUpdate = true;
        var multiMaterial = [
            new THREE.MeshBasicMaterial({ map: texture, side: THREE.FrontSide }), // canvas drawings
            new THREE.MeshBasicMaterial({ color: 0xffffff, wireframe: true, transparent: true }) // for displaying the wireframe
        ];
        var geometry = new THREE.PlaneGeometry(canvas.width, canvas.height); // create plane
        // Snippet from a Stack Overflow post, that works: duplicate each face
        // so the wireframe material is drawn on top of the textured faces.
        for (var i = 0, len = geometry.faces.length; i < len; i++) {
            var face = geometry.faces[i].clone();
            face.materialIndex = 1;
            geometry.faces.push(face);
            geometry.faceVertexUvs[0].push(geometry.faceVertexUvs[0][i].slice(0));
        }
        geometry.dynamic = true;
        THREE.GeometryUtils.center( geometry ); // ?
        var tessellateModifier = new THREE.TessellateModifier( 10 );
        for ( var i = 0; i < 5; i++ )
            tessellateModifier.modify( geometry );
        new THREE.ExplodeModifier().modify( geometry ); // give every face its own vertices
        // Move one face by shifting its three vertices:
        geometry.vertices[0].x -= 300;
        geometry.vertices[1].x -= 300;
        geometry.vertices[2].x -= 300;
        var mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( multiMaterial ) );
        scene.add( mesh );
    };
    // THEN, set the src
    img.src = THREEx.Screenshot.toDataURL(renderer);
}
For now, I moved one face by changing the coordinates of its 3 vertices. I'm asking: is this approach the way to go? The result looks like this. (White lines: wireframe; black lines (drawings in canvas): my desired wireframes; a problem for later.) Note: by moving this way, the texture goes along; I don't know, if I make new triangles using these vertices, how I would set the texture.
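Regarding the texture worry: after ExplodeModifier, each face owns its own three vertices, and the UVs stay attached to the faces, so texture fragments travel with the shards. A minimal sketch of animating the shards under that assumption, giving each face a random drift per frame:
// Hypothetical per-shard velocities (one THREE.Vector3 per face).
var velocities = geometry.faces.map(function () {
    return new THREE.Vector3(
        (Math.random() - 0.5) * 10,
        (Math.random() - 0.5) * 10,
        Math.random() * 5 // push the shards toward the camera
    );
});
function shatterStep() { // call once per animation frame
    geometry.faces.forEach(function (face, i) {
        geometry.vertices[face.a].add(velocities[i]);
        geometry.vertices[face.b].add(velocities[i]);
        geometry.vertices[face.c].add(velocities[i]);
    });
    geometry.verticesNeedUpdate = true; // re-upload the moved vertices
}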
