I am very stuck: I do not understand why my objects are disappearing with CanvasRenderer. Everything works exactly as expected with WebGLRenderer, but I need to display this on mobile devices, so I cannot rely on WebGL being available.
I have tried overdraw: true, but it does not make the missing objects reappear.
http://jsfiddle.net/xH9GD/3/
When I comment out the room, the boxes remain, but they get very mangled on my iPhone.
I understand the concept of z-fighting, but I don't think that is what is happening here, since the z-position of each face should be separate from the others.
floor = drawTopFacingWall(room.width, room.length );
wall1 = drawLeftFacingWall( room.length, room.depth );
wall2 = drawFrontFacingWall( room.width, room.depth );
wall3 = drawRightFacingWall( room.length, room.depth );
roof = drawBottomFacingWall( room.width, room.length );
wall4 = drawBackFacingWall( room.width, room.depth );
The "disappearing" geometry is caused by a limitation of CanvasRenderer in the way it handles depth sorting: while WebGLRenderer sorts at the pixel level, CanvasRenderer sorts at the polygon level. The best you can do is increase the tessellation of your geometry, so the renderer has smaller polygons to sort:
var geometry = new THREE.PlaneGeometry( width, height, 10, 10 );
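To see why this helps, it is enough to count polygons: CanvasRenderer assigns one depth value per polygon, so more (and therefore smaller) polygons mean finer-grained sorting. A minimal sketch of the arithmetic, using a hypothetical helper name (a PlaneGeometry with widthSegments x heightSegments cells contains two triangles per cell):

```javascript
// Each grid cell of a PlaneGeometry is split into 2 triangles, and
// CanvasRenderer depth-sorts each triangle independently.
// (sortablePolygons is a hypothetical name, for illustration only.)
function sortablePolygons(widthSegments, heightSegments) {
  return 2 * widthSegments * heightSegments;
}

console.log(sortablePolygons(1, 1));   // default plane: 2 triangles
console.log(sortablePolygons(10, 10)); // tessellated plane: 200 triangles
```

The trade-off is that CanvasRenderer has to transform and paint every one of those triangles, so raise the segment counts only as far as the artifacts require.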
three.js r.66
Related
I've got a strange issue regarding the automatic resizing of textures by WebGLRenderer in three.js.
I know that WebGL requires the dimensions of certain textures* to be powers of 2.
* that is, textures that use a filter other than a linear one, or a wrap mode other than clamp-to-edge
My texture has its wrap mode set to RepeatWrapping, and its size is
65536 x 512, which is 2^16 x 2^9,
so I assume the size is correct. However, the console says:
THREE.WebGLRenderer: Texture has been resized from (65536x512) to (16384x128)
The downsizing is a real problem, because the loss of quality is clearly visible in the rendered result.
I don't really know what I'm doing wrong; according to the documentation, everything is set correctly.
Is there a way to prevent the downsizing?
I don't think it's relevant, but I'm also including the code that loads the textures:
const texture = new TextureLoader().load(path);
texture.anisotropy = 2;
texture.magFilter = LinearFilter;
texture.minFilter = LinearFilter;
texture.wrapS = RepeatWrapping;
texture.wrapT = RepeatWrapping;
texture.repeat.set(1 / tilesAmountHorizontally, 1 / tilesAmountVertically);
The resizing is not caused by the power-of-two rule: the texture is being clamped because it exceeds the maximum texture size your device supports. You can find out that limit like this:
var gl = document.getElementById( "my-canvas" ).getContext( "experimental-webgl" );
alert( gl.getParameter( gl.MAX_TEXTURE_SIZE ) );
If the texture is larger than that limit, it can be divided and applied in parts.
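This also explains the exact numbers in the console message. A sketch of the clamping logic (a hypothetical helper mirroring the behavior, not the actual three.js source): when either dimension exceeds MAX_TEXTURE_SIZE, the whole image is scaled down uniformly, which preserves the aspect ratio but not the original resolution:

```javascript
// If either dimension exceeds maxSize, scale both down by the same
// factor so the larger dimension lands exactly on maxSize.
// (clampToMaxSize is a hypothetical helper name, for illustration.)
function clampToMaxSize(width, height, maxSize) {
  if (width <= maxSize && height <= maxSize) {
    return { width: width, height: height };
  }
  var scale = maxSize / Math.max(width, height);
  return {
    width: Math.floor(width * scale),
    height: Math.floor(height * scale)
  };
}

// A common MAX_TEXTURE_SIZE on current GPUs is 16384:
console.log(clampToMaxSize(65536, 512, 16384)); // { width: 16384, height: 128 }
```

With maxSize = 16384, a 65536 x 512 texture is scaled by 0.25 to 16384 x 128, which matches the warning in the question exactly.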
I am working on a project in three.js and I need to have multiple images floating in a 3D space, so I started by simply using these images as textures on planes. However, the images have different widths and heights, so I am wondering if there is a way to make each plane adapt to the size of its texture, or at least stay proportional to it.
There may be a simple way to do this, but I didn't find anything. Can one of you help me, or tell me to stop looking for it?
When loading a texture, you can check its size and THEN create the plane to host it with the right width/height ratio:
var loader = new THREE.TextureLoader();
var texture = loader.load( "./img.png", function ( tex ) {
console.log( tex.image.width, tex.image.height );
// here you can create a plane whose width/height ratio matches the image
} );
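Inside that callback, the only work left is arithmetic: pick a fixed world-space height and scale the width by the image's aspect ratio. A minimal sketch, with a hypothetical helper name:

```javascript
// Derive plane dimensions that keep the image's aspect ratio at a
// chosen world-space height. (planeSizeForImage is a hypothetical
// helper name, for illustration only.)
function planeSizeForImage(imageWidth, imageHeight, targetHeight) {
  var aspect = imageWidth / imageHeight;
  return { width: targetHeight * aspect, height: targetHeight };
}

console.log(planeSizeForImage(800, 400, 2)); // { width: 4, height: 2 }
```

You would then pass the result to the geometry constructor, e.g. new THREE.PlaneGeometry( size.width, size.height ), using tex.image.width and tex.image.height from the loader callback as the inputs.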
In Three.js, I have a 3d object where I am using local clipping planes to only render a part of the object.
However, since 3d objects are "hollow" (meaning only the outer surface is rendered), when we clip anything off that surface we can "see into" the object. Here's an example of what I mean, clipping a corner off a cube. Notice how we can see the backside of the opposite corner.
I would like to give the appearance of the object being solid. Based on this issue, it seems that the best way to accomplish this is to create a surface over the clipped region, thus capping the hole and making the object appear like it isn't hollow.
My question is, how do I know where to build this surface? Does Three.js provide a way to get a list of vertices that intersect between a plane and any arbitrary surface? If not, how might I approach this problem myself?
I found this question, but the author didn't describe how they solved the problem I am having here.
You want to render a clipped surface as if it were a solid -- i.e., not hollow.
You can achieve that effect with MeshPhongMaterial -- or any three.js material for that matter -- with a simple hack to the material shader.
material.onBeforeCompile = function( shader ) {
shader.fragmentShader = shader.fragmentShader.replace(
'#include <output_fragment>',
`
vec3 backfaceColor = vec3( 0.4, 0.4, 0.4 );
gl_FragColor = ( gl_FrontFacing ) ? vec4( outgoingLight, diffuseColor.a ) : vec4( backfaceColor, opacity );
`
)
};
This should look pretty good. It will require material.side = THREE.DoubleSide;
Alternatively, see https://threejs.org/examples/webgl_clipping_stencil.html.
three.js r.148
I made a THREE.SectionHelper class which could be interesting if you want to set a different material/color for the inside of the mesh that you are clipping. Check a demo in this fiddle.
var sectionHelper = new THREE.SectionHelper( mesh, 0xffffff );
scene.add(sectionHelper);
I have been developing an app with three.js, but I have encountered a problem I cannot find any solution to.
I want to determine which meshes are currently visible according to where the camera is aiming, so I can refresh my objects (the data comes from a service) only when they are shown in the viewport.
I'm using three.js in canvas mode (I have found a solution using WebGL that tells you whether objects are rendered, but I need canvas for this project).
I have been trying to find out whether three.js somehow sets a property indicating if an object is visible (currently on the screen, not just in the 3D world), but I can't find it. Meshes have a visible property, but it is always true even when the camera is not aimed at the object.
This is the code you're after:
var frustum = new THREE.Frustum();
var cameraViewProjectionMatrix = new THREE.Matrix4();
// every time the camera or objects change position (or every frame)
camera.updateMatrixWorld(); // make sure the camera matrix is updated
camera.matrixWorldInverse.getInverse( camera.matrixWorld ); // in newer three.js: .copy( camera.matrixWorld ).invert()
cameraViewProjectionMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
frustum.setFromMatrix( cameraViewProjectionMatrix ); // renamed setFromProjectionMatrix in newer releases
// frustum is now ready to check all the objects you need
console.log( frustum.intersectsObject( object ) );
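Under the hood, intersectsObject tests the object's bounding sphere against the six frustum planes. A sketch of that plane test in plain JS, under the assumption that each plane is given as a unit normal plus a constant (sphereOutsidePlanes is a hypothetical helper name):

```javascript
// A plane is { normal: {x,y,z}, constant: c } with |normal| = 1; the
// signed distance from point p is dot(normal, p) + constant. A sphere
// is outside the frustum if it lies entirely behind any single plane.
function sphereOutsidePlanes(center, radius, planes) {
  for (var i = 0; i < planes.length; i++) {
    var p = planes[i];
    var distance = p.normal.x * center.x +
                   p.normal.y * center.y +
                   p.normal.z * center.z + p.constant;
    if (distance < -radius) return true; // fully behind this plane
  }
  return false; // intersects or is inside every plane
}

// One plane facing +z through the origin (keeps everything with z > 0):
var planes = [ { normal: { x: 0, y: 0, z: 1 }, constant: 0 } ];
console.log(sphereOutsidePlanes({ x: 0, y: 0, z: 5 }, 1, planes));  // false
console.log(sphereOutsidePlanes({ x: 0, y: 0, z: -5 }, 1, planes)); // true
```

Note this is a conservative test: a sphere can pass all six plane checks while the mesh itself is just outside a frustum corner, so treat a positive result as "possibly visible".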
I am developing a volume rendering app in WebGL, and the last thing I have to do is create a lighting model. I have everything prepared, but I am not able to get the light position in the scene. I am adding the light this way:
var light = new THREE.DirectionalLight(0xFFFFFF, 1);
light.position.set(0.5, 0.5, 0.1).normalize();
camera.add(light);
I attach the light to the camera because I need the light to stay static while I move the camera.
The problem is that I am using ShaderMaterial (a custom shader), and I am not able to find any uniform variables that represent the light position. I read that I should set:
material.lights = true;
but it caused errors:
Uncaught TypeError: Cannot set property 'value' of undefined
I have tried adding a constant vector in the vertex shader, but then I need to multiply by the inverse view matrix (if I am right), and GLSL 1.0 does not support an inverse function. My idea is to send the inverse view matrix to the shader as a uniform, but I don't know where to get the scene's view matrix in JS.
Thanks for any help, I have tried everything.
If you are going to add the light as a child of the camera and set material.lights = true, then you must add the camera as a child of the scene.
scene.add( camera );
three.js r.57
If you're trying to project the light's coordinates from model space to screen space using the camera, here's a function (courtesy of Thibaut Despoulain) that might help:
// note: this uses the n11..n44 Matrix4 accessors from very old three.js
// releases; in modern three.js, read the same values from matrix.elements
var projectOnScreen = function ( object, camera )
{
    var mat = new THREE.Matrix4();
    mat.multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld );
    mat.multiplyMatrices( camera.projectionMatrix, mat );
    var c = mat.n44; // w component for the perspective divide
    var lPos = new THREE.Vector3( mat.n14 / c, mat.n24 / c, mat.n34 / c );
    lPos.multiplyScalar( 0.5 ); // map from [-1, 1] NDC...
    lPos.addScalar( 0.5 );      // ...to [0, 1] screen space
    return lPos;
}
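Since that function returns coordinates in the [0, 1] range, converting to pixel coordinates is one more scaling step; a minimal sketch with a hypothetical helper name (note the y flip, since screen y grows downward while NDC y grows upward):

```javascript
// Convert the [0, 1] screen-space position produced above into pixel
// coordinates on a canvas. (toPixels is a hypothetical helper name;
// lPos is assumed to already be in the [0, 1] range.)
function toPixels(lPos, canvasWidth, canvasHeight) {
  return {
    x: lPos.x * canvasWidth,
    y: (1 - lPos.y) * canvasHeight // flip: screen y grows downward
  };
}

console.log(toPixels({ x: 0.5, y: 0.5 }, 800, 600)); // { x: 400, y: 300 }
```

This is useful, for example, for positioning a 2D lens-flare or HTML overlay at the light's on-screen location.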