I have converted a mesh from .obj to JSON using the official converter (https://github.com/mrdoob/three.js/tree/master/utils/converters/obj). The mesh is displayed correctly from one side, but not from the other. I believe the issue is that it only has one side, essentially a plane, so all faces and their normals point in one direction.
How could I fix this?
Edit: I could add the mesh twice, the second time with material.side = THREE.BackSide;, but loading it twice doesn't seem like the best option to me.
Only front faces of triangle primitives are rendered by default when using WebGLRenderer. You can override that by setting:
material.side = THREE.DoubleSide; // or THREE.BackSide
The front face is determined by the winding order of the vertices.
If the triangle vertices are specified in counter-clockwise order (CCW), the front face will face towards you; if specified in clockwise order (CW), the front face will face away from you.
This behavior can be changed by the WebGLRenderer.setFaceCulling() method, but it is not likely you would find that necessary.
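For example, a minimal sketch applying this while loading the converted JSON mesh (the file name and the loader usage here are assumptions, not taken from the question):

// Hedged sketch: flip every material to DoubleSide when loading the converted JSON mesh.
var loader = new THREE.JSONLoader();
loader.load( 'mesh.json', function ( geometry, materials ) {
    materials.forEach( function ( material ) {
        material.side = THREE.DoubleSide; // render both faces of the single-sided plane
    } );
    scene.add( new THREE.Mesh( geometry, new THREE.MultiMaterial( materials ) ) );
} );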
three.js r.75
I am trying to retrieve and export the mesh that is being displaced by a displacementMap.
The shader transforms vertices according to this line (from
three.js/src/renderers/shaders/ShaderChunk/displacementmap_vertex.glsl):
transformed += normalize( objectNormal ) * ( texture2D( displacementMap, uv ).x * displacementScale + displacementBias );
This displaces a vertex along its normal by an amount sampled from the displacementMap at that vertex's UV coordinates.
I am trying to create this mesh/geometry so that I can then later export it.
I have created a "demo" of the problem here:
Github Page
I would like to get the displaced mesh, as seen in the viewport, upon pressing exportSTL. However, I am only getting the undisplaced plane.
I understand why this happens: the displacement only happens in the shader and does not actually displace the geometry of the plane.
I have not found a method provided by three.js, and so far I have not found any way of getting the changes out of the shader.
So I am trying to do it with a function in the "demo.js".
However, I am a WebGL/three.js newbie and have problems re-creating what the shader does.
I have found exporters handling morphTargets, but these are of no help.
After reading this question I tried PlaneBufferGeometry, as this is closer to the shader - but this produces the same results for me.
I think this question originally tried to achieve something similar, but accepted an unrelated answer.
In the end I would like to draw on an HTML canvas, which then updates the texture in real time (I have this part working). The user can then export the mesh for 3D printing.
Is there a way three.js can give me the modified geometry of the shader?
Or can someone help me translate the shader line into a "conventional" Three.js function?
Maybe this is totally the wrong approach to get a displaced mesh?
Update - Example is working
Thanks to the example from DeeFisher, I can now calculate the displacement on the CPU, as originally suggested by imerso.
If you click on the Github Page now, you will get a working example.
At the moment I do not fully understand why I have to mirror the canvas to get the correct displacement in the end, but this is at worst a minor nuisance.
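For reference, a minimal sketch of that CPU-side displacement, assuming a BufferGeometry (e.g. PlaneBufferGeometry) and access to the displacement map's image; the function and variable names are mine, not from the demo. It mirrors the shader line quoted above, including the v-flip that likely explains the mirroring mentioned here:

// Hedged sketch: replicate the displacementmap_vertex shader chunk on the CPU.
// Assumes a BufferGeometry with position/normal/uv attributes and a loaded image.
function displaceGeometry( geometry, image, displacementScale, displacementBias ) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = image.width;
    canvas.height = image.height;
    var context = canvas.getContext( '2d' );
    context.drawImage( image, 0, 0 );
    var pixels = context.getImageData( 0, 0, canvas.width, canvas.height ).data;

    var position = geometry.attributes.position;
    var normal = geometry.attributes.normal;
    var uv = geometry.attributes.uv;

    for ( var i = 0; i < position.count; i ++ ) {
        // sample the red channel at this vertex's UV; note the flipped v,
        // since canvas y grows downward while UV v grows upward
        var x = Math.min( canvas.width - 1, Math.floor( uv.getX( i ) * canvas.width ) );
        var y = Math.min( canvas.height - 1, Math.floor( ( 1 - uv.getY( i ) ) * canvas.height ) );
        var d = pixels[ ( y * canvas.width + x ) * 4 ] / 255;

        // transformed += normalize( objectNormal ) * ( d * displacementScale + displacementBias );
        var offset = d * displacementScale + displacementBias;
        position.setXYZ( i,
            position.getX( i ) + normal.getX( i ) * offset,
            position.getY( i ) + normal.getY( i ) * offset,
            position.getZ( i ) + normal.getZ( i ) * offset );
    }

    position.needsUpdate = true;
    geometry.computeVertexNormals();
}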
To do that while still using a shader for the displacement, you will need to switch to WebGL2 and use Transform Feedback (Google search: WebGL2 Transform Feedback).
An alternative would be to read the texture back to the CPU and scan it while displacing the vertices on the CPU only (Google search: WebGL readPixels).
Both alternatives will require some effort, so no code sample at this time. =)
BABYLON.js can be used in conjunction with THREE.js and it allows you to displace the actual mesh vertices when applying displacement maps:
var sphere = BABYLON.Mesh.CreateSphere("Sphere", 64, 10, scene, true);
sphere.applyDisplacementMap(url, minHeight, maxHeight, onSuccess, uvOffset, uvScale)
See an example of the function in use here.
You can then use a for loop to transfer the BABYLON mesh data into a THREE mesh object.
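A hedged sketch of that copy step, assuming the displaced sphere from above and a three.js r7x-era API; the variable names are mine:

// Hedged sketch: copy the Babylon vertex data into a three.js BufferGeometry.
var positions = sphere.getVerticesData( BABYLON.VertexBuffer.PositionKind );
var indices = sphere.getIndices();

var geometry = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( new Float32Array( positions ), 3 ) );
geometry.setIndex( new THREE.BufferAttribute( new Uint32Array( indices ), 1 ) );
geometry.computeVertexNormals();

var threeMesh = new THREE.Mesh( geometry, new THREE.MeshLambertMaterial() );
scene.add( threeMesh );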
I'm trying to load the same image with three.js into a large number (~1000) of two-dimensional shapes, but with a different offset in every shape.
I've taken this demo from the official website and customized it into this other demo, with all my shapes and a random background texture.
The problem is that if I clone the texture once per shape, the page eats a lot of RAM and ends up crashing.
You can see this in action by going in the javascript and changing the comments in the addShape function (you'll find the instructions in the code).
I've done some research and found some results, like this open issue or this older question where it's recommended to clone the texture; however, nothing seems to work in my example.
Am I doing something wrong? Has something changed since those last posts about this problem?
Maybe I'm misunderstanding the problem, but why don't you change the UV coordinates of the individual shapes to align the texture and use just one texture?
From documentation:
Geometry.faceVertexUvs
Array of face UV layers, used for mapping textures onto the geometry. Each UV layer is an array of UVs matching the order and number of vertices in faces.
To signal an update in this array, Geometry.uvsNeedUpdate needs to be set to true.
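A hedged sketch of that idea - shift each shape's UVs by a per-shape offset so all meshes can share a single texture instance (this assumes the texture's wrapS/wrapT are set to THREE.RepeatWrapping; the function and variable names are mine):

// Hedged sketch: offset the UVs of a (classic) Geometry in place.
function offsetUVs( geometry, offsetX, offsetY ) {
    geometry.faceVertexUvs[ 0 ].forEach( function ( faceUvs ) {
        faceUvs.forEach( function ( uv ) {
            uv.x += offsetX;
            uv.y += offsetY;
        } );
    } );
    geometry.uvsNeedUpdate = true;
}

// e.g. a random offset per shape, all sharing the same texture instance
offsetUVs( shapeGeometry, Math.random(), Math.random() );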
I'm having trouble applying a texture correctly to an object.
As you can see from this picture: http://i.stack.imgur.com/4WpP4.png, the texture is repeated and not applied continuously across the whole front face of the object.
Here: http://goo.gl/Dx6hDI you can find the code and a live example.
Can someone help me?
Your object does not have correct UV coordinates. Load the object into a 3D editor, apply the coordinates you want, then export a new version.
I'm having a hard time figuring out the best way to check whether an Object3D is visible to the camera.
I have a sphere in the middle of the screen. Some cubes are added to its surface randomly. What I need is a way to check which cubes are visible (on the front half of the sphere) and which ones are invisible (on the back half of the sphere) from the camera's point of view.
What I have found so far seems to be the right direction - but I must be missing something with the THREE.Raycaster class.
Here is a fiddle of the code that I'm using: jsfiddle. I have tried to make it as clear as possible.
This part of the fiddle might contain the buggy code:
var raycaster = new THREE.Raycaster();
var origin = camera.position, direction, intersects, rayGeometry = new THREE.Geometry(), g;

pointGroup.children.forEach(function(pointMesh) {
    direction = pointMesh.position.clone();
    // I THINK THIS CALCULATION MIGHT BE WRONG - BUT I DON'T KNOW HOW TO CORRECT IT
    raycaster.set(origin, direction.sub(origin).normalize());
    // if the pointMesh's position is on the back half of the globe, the ray should intersect the globe first and hit the point as the second target - because the cube is hidden behind the bigger sphere object
    intersects = raycaster.intersectObject(pointMesh);
    // this is always empty - should contain objects that are located on the back of the sphere ...
    console.log(intersects);
});
Frustum culling does not work for this, as outlined in this Stack Overflow question: post1
Also, post2 and post3 explain the topic really well, but not quite for this situation.
Thank you for your help!
You want to look at occlusion culling techniques. Frustum culling works fine and is not what you are describing: frustum culling just checks whether an object (or its bounding box) is inside the camera's view pyramid. You perform occlusion culling in addition to frustum culling, especially when you want to eliminate objects that are occluded by other objects inside the view frustum. But it is not an easy task.
I just worked through a similar problem where I was trying to detect when a point in world space passed out of view of the camera and behind specific objects in the scene. I created a jsfiddle (see below) for it. When the red "target" passes behind any of the three "walls", a blue line is drawn from the "target" to the camera. I hope this helps.
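For the sphere/cubes case above, the occlusion test can be sketched by raycasting against the occluder and the cube together and checking which is hit first; globe and pointGroup are assumed names, not confirmed by the fiddle:

// Hedged sketch: a cube is hidden if the ray from the camera hits the globe first.
var raycaster = new THREE.Raycaster();
pointGroup.children.forEach( function ( pointMesh ) {
    var direction = pointMesh.position.clone().sub( camera.position ).normalize();
    raycaster.set( camera.position, direction );
    // test both objects; intersections are sorted by distance, so the first hit is the closest
    var hits = raycaster.intersectObjects( [ globe, pointMesh ] );
    var occluded = hits.length > 0 && hits[ 0 ].object !== pointMesh;
    pointMesh.visible = ! occluded;
} );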
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object drawn later. I have a camera that can move only on the Z axis.
I tried to use:
a cube-mapping shader - PROBLEM: artifacts with the shadow planes; the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artifacts with the shadow plane - it has edge highlighting - OR it draws only the other sprites without the objects.
an HTML DOM background - PROBLEM: big and ugly aliasing in the models.
What else can I try? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over the first "buffer", perhaps using the buffer as the background (painting it in 2D with an orthographic projection and disabling depth-buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
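In three.js the same effect can be sketched with two scenes and manual clearing; the scene and camera names here are assumptions:

// Hedged sketch: render the background scene first, then the main scene on top.
renderer.autoClear = false;

function render() {
    renderer.clear();
    renderer.render( backgroundScene, backgroundCamera ); // e.g. an orthographic camera with a full-screen quad
    renderer.clearDepth(); // keep the colors, drop the depth so the main scene draws over it
    renderer.render( scene, camera );
    requestAnimationFrame( render );
}
render();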
If you want a "3d" background, i.e. something that will follow the rotation of your camera but not react to its movement (be infinitely far away), then the only way to do it is with a cubemap.
The other solution is an environment dome - a fully 3D object.
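A minimal sketch of such a dome - an inverted sphere with the texture on the inside and depth writes disabled so it never occludes scene objects (the texture path is a placeholder):

// Hedged sketch: an environment dome as a back-facing sphere.
var dome = new THREE.Mesh(
    new THREE.SphereGeometry( 500, 32, 16 ),
    new THREE.MeshBasicMaterial( {
        map: new THREE.TextureLoader().load( 'background.jpg' ), // placeholder texture
        side: THREE.BackSide, // render the inside of the sphere
        depthWrite: false     // never occlude scene objects
    } )
);
scene.add( dome );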
If you want a static background, then you should be able to just use an HTML background; I'm not sure why this would fail or what 'aliasing in models' you are talking about.