I'm implementing a small 3D maze game, rendering the walls as Meshes affected by a PointLight. Some objects will be implemented as Sprites, but I'm having an issue: the objects appear fully illuminated, behaving like a MeshBasicMaterial.
Is it possible to have these Sprites affected by the light, just like Lambert or Phong materials?
Instead of using a Sprite, why not use a PlaneGeometry with a MeshLambertMaterial, and keep the rotation of the plane in sync with the rotation of the camera so that the plane always faces the camera just as a Sprite does? See the related question "Three.js - billboard effect, maintain orientation after camera pans" for further details.
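A minimal sketch of that idea with a recent three.js build (the texture path and the scene/camera/renderer variables are assumptions):

```javascript
// A lit, camera-facing plane that stands in for a Sprite.
const texture = new THREE.TextureLoader().load('object-texture.png'); // hypothetical path
const billboard = new THREE.Mesh(
  new THREE.PlaneGeometry(1, 1),
  new THREE.MeshLambertMaterial({ map: texture, transparent: true })
);
scene.add(billboard);

function animate() {
  requestAnimationFrame(animate);
  // Copy the camera's orientation every frame so the plane is
  // always screen-aligned, just like a Sprite.
  billboard.quaternion.copy(camera.quaternion);
  renderer.render(scene, camera);
}
animate();
```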
Having Sprites respond to lights is currently not supported. They are affected by Fog, however.
three.js r.62
Related
My scene contains a number of planes with PNG textures. At first, my problem was that the transparent parts of the textures hid other objects behind them. I found a workaround by disabling depth testing and depth buffer writing, which however introduced another problem: at certain distances and angles, objects that are behind render over those that are in front of them.
Now, I'm a relative newcomer to 3D programming, but through reading other SO answers, such as:
Three.js / WebGL - transparent planes hiding other planes behind them
Transparent textures behaviour in WebGL
Three.js - depthWrite vs depthTest for transparent canvas texture map on THREE.Points
I think I understand that depth and transparency are a difficult issue in general. However, I am not sure what the right direction is from here. Should I manually calculate the distance of the objects from the camera, then tell three.js to render them from farthest to closest? Would that fix the issue? Or is there another general solution?
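For reference, a hedged sketch of the "sort it yourself" idea: with depth writing disabled, you can bias the draw order of the planes by their distance to the camera each frame via Object3D.renderOrder (lower values draw first). The variable names here are assumptions:

```javascript
// Draw transparent planes far-to-near by setting renderOrder from camera
// distance each frame. `planes` is assumed to be an array of your textured
// plane meshes; keep depthWrite: false on their materials.
const camPos = new THREE.Vector3();

function sortTransparentPlanes(camera, planes) {
  camera.getWorldPosition(camPos);
  for (const plane of planes) {
    // Lower renderOrder draws first, so negate the distance to make
    // the farthest plane draw first.
    plane.renderOrder = -plane.position.distanceTo(camPos);
  }
}
```

Call this once per frame before rendering. Note that three.js already sorts transparent objects back-to-front by object position; an explicit renderOrder like this just makes the ordering deterministic when that default heuristic fails.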
Say I have a nice spaceship rendered in my scene. I then zoom out and the ship becomes smaller. This is all quite simple to achieve. Now at some point I want to replace the spaceship object with a simple triangle representing the ship. At this stage the icon for the ship should be rendered instead of the ship. Obviously the triangle should move based on the ship's movement and my camera movements. The triangle should not change orientation, so even if I rotate the camera or have the ship roll, the triangle should stay in the same orientation.

On a canvas this would be very simple to do: I would just take the x and y components of the 3d object and draw my triangle at those coordinates. My problem is that I do not know how to draw on the WebGL canvas directly. Is it possible? If not, does anybody have any pointers to strategies for getting this done? I would be happy to be nudged in the right direction :) Thanks in advance.
Update: What I eventually decided to do was to use the orthographic camera overlay approach suggested below and in a couple of other places.
Use a Sprite object for the icon, then place the icon on the line between the spaceship and the camera, at a specific distance from the camera (using a Raycaster, or just direct calculation).
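A sketch of that placement, assuming `ship`, `camera`, and `iconSprite` already exist in your scene, and ICON_DISTANCE is a hypothetical tuning constant:

```javascript
const ICON_DISTANCE = 10; // hypothetical distance from the camera
const dir = new THREE.Vector3();

function updateIconPosition() {
  // Direction from the camera towards the ship, then step along it
  // a fixed distance so the icon keeps a constant screen size.
  dir.subVectors(ship.position, camera.position).normalize();
  iconSprite.position.copy(camera.position).addScaledVector(dir, ICON_DISTANCE);
}
```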
Use of an orthographic overlay, as mentioned by @WestLangley, is also possible. Since you would have to calculate the ship's position relative to your canvas in this case anyway, you could even create a pure HTML overlay using a DIV and place your icon in it as an IMG element.
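A sketch of the HTML variant: project the ship's world position into normalized device coordinates with Vector3.project(), then convert to CSS pixels. Here `iconDiv` is a hypothetical absolutely positioned element:

```javascript
const ndc = new THREE.Vector3();

function updateIconOverlay() {
  ndc.copy(ship.position).project(camera); // world -> NDC, each axis in [-1, 1]
  iconDiv.style.left = ((ndc.x + 1) / 2) * window.innerWidth + 'px';
  iconDiv.style.top = ((1 - ndc.y) / 2) * window.innerHeight + 'px';
  // You may also want to hide the icon when ndc.z > 1 (ship behind the camera).
}
```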
I'm trying to build a camera calibration function for a three.js app and could very much do with some help.
I have a geometry in a 3d scene; let's say for this example it is a cube. As this object and scene exist in 3d, all properties of the cube are known.
I also have a physical copy of the geometry.
What I would like to do is take a photo of the physical geometry and mark 4+ points on the image in x and y. These points correspond to 4+ points on the 3d geometry.
Using these point correspondences, I would like to work out the position and orientation of the camera relative to the geometry in the photo, and then match a virtual camera to the 3d geometry in the three.js scene.
I looked into the possibility of using an AR lib such as JS-aruco or JSARToolkit, but these systems need a marker, whereas my system needs to be markerless. The user will choose the 4 (or more) points on the image.
I've been doing some research and identified that Tsai's algorithm for camera alignment should suit my needs.
While I have a good knowledge of JavaScript and three.js, my linear algebra isn't the best, and hence I am having a problem translating the algorithm into JavaScript.
If anyone could give me some pointers, or is able to explain the process of Tsai's algorithm in a JavaScript manner, I would be so super ultra thankful.
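Tsai's algorithm itself is too long to sketch here, but once it (or any pose-estimation routine) has produced the extrinsics, a rotation R and translation t that map world points into camera space, applying them to a three.js camera is just a matrix inversion. A hedged sketch, assuming a row-major flat 3x3 R and a recent three.js (Matrix4.invert() exists from r123 onward):

```javascript
// R: row-major 3x3 rotation as a flat array of 9 numbers,
// t: translation [tx, ty, tz]; together they form the view matrix.
function applyExtrinsics(camera, R, t) {
  const view = new THREE.Matrix4().set(
    R[0], R[1], R[2], t[0],
    R[3], R[4], R[5], t[1],
    R[6], R[7], R[8], t[2],
       0,    0,    0,    1
  );
  // The camera's world transform is the inverse of the view matrix.
  const pose = view.clone().invert();
  pose.decompose(camera.position, camera.quaternion, camera.scale);
  // Caveat: computer-vision conventions usually have the camera looking
  // down +Z, while three.js cameras look down -Z, so you may need to
  // flip the Y and Z axes of R and t to convert between the two.
}
```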
I was wondering if there's a way to project a shadow without having a "ground" plane to project it onto.
I'd like to do that because the camera can be moved around the object, and it would be ugly to see it pass through the ground.
I'm using the latest version of Three.js with the WebGL renderer.
Yes, this is possible by applying a ShadowMaterial to a plane geometry. This material can receive shadows but is otherwise completely transparent, so you just position the plane geometry at the desired location in the scene and you are good to go. Check out the docs: https://threejs.org/docs/#api/en/materials/ShadowMaterial
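A minimal sketch (the scene, renderer, light, and shadow-casting object are assumed to exist already):

```javascript
// An invisible plane that shows only the shadows cast onto it.
const ground = new THREE.Mesh(
  new THREE.PlaneGeometry(20, 20),
  new THREE.ShadowMaterial({ opacity: 0.4 }) // opacity controls shadow darkness
);
ground.rotation.x = -Math.PI / 2; // lay it flat under the object
ground.receiveShadow = true;
scene.add(ground);

// Shadows also have to be enabled on the renderer, the light, and the caster:
renderer.shadowMap.enabled = true;
light.castShadow = true;
myObject.castShadow = true;
```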
Casting a shadow onto nothing is technically impossible, but you could write a shader that renders the shadow on a transparent plane; that way you would not notice the plane when the camera goes through it, only when it goes through the shadow itself.
To do so, you can lerp between the shadow ratio and a transparent black or white in the pixel shader, and then set the corresponding blending states on the rendering context.
Is there a way to achieve a high-performance motion-blur effect in WebGL?
I'm using Three.js, and the scene is a few simple plane objects with different textures. I move the camera along the x axis.
This example does a post-processing pass of motion blur:
http://mrdoob.github.com/three.js/examples/webgl_materials_cubemap_dynamic.html
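For reference, a hedged sketch of the cheap "accumulation" trick that old three.js examples used for this kind of trail effect: don't clear the color buffer between frames, and fade the previous frame with a semi-transparent full-screen quad (the opacity controls the trail length). The scene, camera, and renderer variables are assumptions:

```javascript
renderer.autoClearColor = false; // keep the previous frame's colors

// Full-screen quad that darkens whatever is already in the framebuffer.
const fadeScene = new THREE.Scene();
const fadeCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
fadeScene.add(new THREE.Mesh(
  new THREE.PlaneGeometry(2, 2),
  new THREE.MeshBasicMaterial({
    color: 0x000000,
    transparent: true,
    opacity: 0.2, // lower = longer motion trails
    depthTest: false,
    depthWrite: false
  })
));

function render() {
  requestAnimationFrame(render);
  renderer.render(fadeScene, fadeCamera); // fade the previous frame
  renderer.render(scene, camera);         // draw the current frame on top
}
render();
```

This costs only one extra full-screen quad per frame, so it stays fast, but it is a frame-feedback trail rather than a true per-pixel velocity blur.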