Is it possible to dynamically switch antialiasing on/off with three.js? I tried the following which doesn't work at all:
var rendererAA = new THREE.WebGLRenderer( { antialias: true } );
rendererAA.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( rendererAA.domElement );
var rendererNA = new THREE.WebGLRenderer( { antialias: false } );
rendererNA.setSize( window.innerWidth, window.innerHeight );
rendererNA.domElement = rendererAA.domElement;
var renderer = rendererAA;
function render() {
    if ( somePredicate ) {
        renderer = rendererAA;
    } else {
        renderer = rendererNA;
    }
    requestAnimationFrame( render );
    renderer.render( scene, camera );
}
If the active renderer is rendererNA (somePredicate = false), the scene just freezes and there is no change in antialiasing.
I also tried setting the inactive renderer's domElement to null, which didn't help. I was inspired by this question:
Dynamically turn on/off antialiasing and shadows in WebGLRenderer
But there was no definite answer.
This is not possible currently with Three.js.
You need to recreate the entire WebGL context to switch antialiasing on or off; you cannot simply toggle it on an existing renderer. I suggest you use a boolean that decides whether AA is on before the renderer is created on the client side, e.g. an HD/SD setting: if HD, create the WebGLRenderer with antialias: true, otherwise with antialias: false. But you cannot change this dynamically at runtime.
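For illustration, here is a minimal sketch of that approach; the useHD flag and the confirm prompt are hypothetical stand-ins for whatever client-side quality setting you read before creating the renderer:
var useHD = window.confirm( 'Run in HD (antialiased) mode?' ); // hypothetical quality toggle
var renderer = new THREE.WebGLRenderer( { antialias: useHD } );
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
// To change the setting later you would have to dispose this renderer, remove its
// canvas from the DOM and build a new renderer, i.e. recreate the whole context.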
I'm a beginner to three.js. I'm trying to build something similar to this: https://virtualshowroom.nissan.in/car-selected.html?selectedCar=ext360_deep_blue_pearl. I built everything using three.js, but I'm not able to figure out how to create a hotspot (like the red dot in the above link) and show a pop-up when you click on it. Below is my project code; let me know if anything else is required.
<html>
<head>
<title>My first three.js app</title>
<style>
body { margin: 0; }
canvas { display: block; }
</style>
</head>
<body>
<h1></h1>
<script src="./three.js"></script>
<script type="module">
import { GLTFLoader } from 'https://threejs.org/examples/jsm/loaders/GLTFLoader.js';
import { OrbitControls } from 'https://threejs.org/examples/jsm/controls/OrbitControls.js';
var renderer,scene,camera;
scene = new THREE.Scene();
scene.background = new THREE.Color(0xfff6e6)
camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var loader = new GLTFLoader();
var hlight = new THREE.AmbientLight(0x404040, 100)
scene.add(hlight)
var directionalLight = new THREE.DirectionalLight(0xffffff, 100)
directionalLight.position.set(0,1,0)
directionalLight.castShadow = true
scene.add(directionalLight)
var light = new THREE.PointLight(0xffffff, 10)
light.position.set(0, 300, 500)
scene.add(light)
var light2 = new THREE.PointLight(0xffffff, 10)
light2.position.set(500, 100, 0)
scene.add(light2)
var light3 = new THREE.PointLight(0xffffff, 10)
light3.position.set(0, 100, -500)
scene.add(light3)
var light4 = new THREE.PointLight(0xffffff, 10)
light4.position.set(-5000, 300, 0)
scene.add(light4)
var controls = new OrbitControls(camera, renderer.domElement);
loader.load( './scene.gltf', function ( gltf )
{
scene.add( gltf.scene );
}, undefined, function ( error ) { console.error( error ); } );
// load a image resource
camera.position.z = 5;
var animate = function () {
requestAnimationFrame( animate );
renderer.render( scene, camera );
};
animate();
</script>
</body>
</html>
Those “hotspots”, as you call them, are annotations, where the annotation content is basically pure HTML.
The tutorial in the link is probably the best step-by-step resource you can follow to learn how to do it in your scene.
I can give a walkthrough of the steps required to get the desired effect, since I have done it a few times myself (a code sketch putting these steps together follows the list):
1. Define a 3D point in your scene where the hotspot should be. You can optionally nest it in another Object3D to make sure it scales, moves and rotates with the model / parent.
2. Add a plane at this point and load an image texture onto it; there you have your visible hotspot.
3. Update the hotspots every frame so they always face the camera, using the lookAt function.
4. When the user clicks the screen, cast a ray against all the hotspots in your scene. The easiest way to do this is to store all your hotspots in an array.
5. When the ray hits a hotspot, take either the hit point or the hotspot's position and transform it to screen coordinates. Search on Stack Overflow for how to do this; I am sure there is a post about it.
6. Final step: display your HTML at the screen location you obtained in the previous step.
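Here is a minimal sketch of steps 1–5, assuming globals like scene, camera and renderer as in your code; the texture URL, the hotspot size and the showPopup handler are hypothetical placeholders:
var hotspots = [];
function addHotspot( position ) {
    var texture = new THREE.TextureLoader().load( './hotspot.png' ); // hypothetical image
    var material = new THREE.MeshBasicMaterial( { map: texture, transparent: true } );
    var hotspot = new THREE.Mesh( new THREE.PlaneGeometry( 0.2, 0.2 ), material );
    hotspot.position.copy( position );
    scene.add( hotspot ); // or add it to the car object so it moves with the parent
    hotspots.push( hotspot );
}
// step 3: keep the hotspots facing the camera (call this from your animate loop)
function updateHotspots() {
    hotspots.forEach( function ( h ) { h.lookAt( camera.position ); } );
}
// steps 4 and 5: raycast on click and convert the hit to screen coordinates
var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
renderer.domElement.addEventListener( 'click', function ( event ) {
    mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
    mouse.y = - ( event.clientY / window.innerHeight ) * 2 + 1;
    raycaster.setFromCamera( mouse, camera );
    var hits = raycaster.intersectObjects( hotspots );
    if ( hits.length > 0 ) {
        var p = hits[ 0 ].point.clone().project( camera ); // normalized device coordinates
        var x = ( p.x + 1 ) / 2 * window.innerWidth;
        var y = ( 1 - p.y ) / 2 * window.innerHeight;
        showPopup( x, y ); // hypothetical function that positions your HTML pop-up
    }
} );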
The advantage of this method is that the hotspots integrate nicely with the model in your scene, whereas purely HTML-based hotspots would always be drawn on top of the scene.
That is about all there is to it. Let me know if you need any further clarification!
I would like to add a mask effect on my scene.
I found this cool jsfiddle : https://jsfiddle.net/f2Lommf5/4703/
I've been wondering if it's possible to make the white part of that texture transparent, so that my background object is cropped depending on the texture of the plane above it.
I tried playing with the alphaTest value, but in vain.
Does anyone have an idea of how to achieve this result? Thank you.
I'm not 100% sure I understand your intended result, but it should be possible to implement the effect via post-processing. In the following live demo, MaskPass is used to create a mask where no pixels of the actual beauty pass are rendered. The important code section is:
var clearPass = new ClearPass();
var maskPass = new MaskPass( sceneMask, camera );
maskPass.inverse = true;
var renderPass = new RenderPass( scene, camera );
renderPass.clear = false;
var clearMaskPass = new ClearMaskPass();
var outputPass = new ShaderPass( CopyShader );
var parameters = {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBFormat,
    stencilBuffer: true
};
var renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, parameters );
composer = new EffectComposer( renderer, renderTarget );
composer.addPass( clearPass );
composer.addPass( maskPass );
composer.addPass( renderPass );
composer.addPass( clearMaskPass );
composer.addPass( outputPass );
Notice that the mask object (the plane) is managed in a separate scene.
Live demo: https://jsfiddle.net/e6p0axb1/5/
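For context, the mask object living in its own scene could be set up roughly like this (a sketch; the plane's size and transform are placeholders you would adapt to your texture):
var sceneMask = new THREE.Scene();
var maskPlane = new THREE.Mesh(
    new THREE.PlaneGeometry( 10, 10 ),
    new THREE.MeshBasicMaterial() // the material does not matter, only which pixels the plane covers
);
sceneMask.add( maskPlane ); // everything in sceneMask defines where the beauty pass may render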
I have a three.js project with 3d-models, a ground and a grid in it.
The 3D models are outlined with OutlinePass (https://threejs.org/examples/?q=outl#webgl_postprocessing_outline).
I am able to move the objects with TransformControls (https://threejs.org/examples/?q=transf#misc_controls_transform) and I can change my camera position with OrbitControls (https://threejs.org/examples/?q=orbit#misc_controls_orbit).
The problem: the graphics look kind of badly rendered; here are some screenshots:
https://imgur.com/gallery/3FrZt3s
I don't really know which settings I should use here:
renderer = new THREE.WebGLRenderer(); // or with { antialias: true }? With antialiasing or without?
// antialiasing is only needed when not using FXAA, right?
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.gammaOutput = true;
renderer.physicallyCorrectLights = true;
camera = new THREE.PerspectiveCamera( 45, container.offsetWidth / container.offsetHeight, 0.001, 1000 );
camera.addEventListener( 'change', render ); //Is this necessary? Seems like it has no use
FXAA is probably necessary for the OutlinePass (it is also used in the OutlinePass example linked above).
composer = new EffectComposer( renderer );
var renderPass = new RenderPass( scene, camera );
composer.addPass( renderPass );
effectFXAA = new ShaderPass( FXAAShader );
effectFXAA.uniforms[ 'resolution' ].value.set( 1 / window.innerWidth, 1 / window.innerHeight );
effectFXAA.renderToScreen = true;
composer.addPass( effectFXAA );
orbitControls = new OrbitControls( camera, renderer.domElement );
orbitControls.update();
orbitControls.addEventListener( 'change', render );
function render(){
    renderer.render(scene, camera);
    //composer.render(); // don't know if needed
}
So I have to say, I don't really have a clue how to solve this rendering issue or which settings to use to get the most out of my project. I'm happy for every hint and answer, and maybe I can put the answers together and solve the issue.
When using post-processing with WebGL 1, you have to use FXAA for antialiasing. Passing { antialias: true } when creating the WebGLRenderer activates MSAA, but only if you render to the default framebuffer (directly to screen).
In any event, you configure the FXAA pass like so:
effectFXAA = new ShaderPass( FXAAShader );
effectFXAA.uniforms[ 'resolution' ].value.x = 1 / ( window.innerWidth * pixelRatio );
effectFXAA.uniforms[ 'resolution' ].value.y = 1 / ( window.innerHeight * pixelRatio );
composer.addPass( effectFXAA );
You have to honor the pixelRatio. Besides, setting renderToScreen to true is not necessary anymore. The last pass in the post-processing chain is automatically rendered to screen now.
When using EffectComposer, you do not call renderer.render( scene, camera ). You have to use composer.render() instead.
camera.addEventListener( 'change', render ); can also be deleted. I'm not sure where you saw this, but it has no effect.
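Putting it together, a minimal sketch of the sizing and the render loop under these assumptions (the pixel ratio is read from the renderer, and the composer is sized to the window):
var pixelRatio = renderer.getPixelRatio();
composer.setSize( window.innerWidth, window.innerHeight );
effectFXAA.uniforms[ 'resolution' ].value.x = 1 / ( window.innerWidth * pixelRatio );
effectFXAA.uniforms[ 'resolution' ].value.y = 1 / ( window.innerHeight * pixelRatio );
function render() {
    requestAnimationFrame( render );
    composer.render(); // not renderer.render( scene, camera )
}
render();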
three.js R109
I am trying to include, in a main window (showing a sphere), a subwindow showing a zoomed view of that sphere.
At the moment, I can display a bottom-right subwindow containing the three axes of the main scene. These axes rotate the same way I rotate the sphere with the mouse.
Now, I would like to display, in this subwindow, a zoomed view of the sphere instead of the 3D axes.
From Can multiple WebGLRenderers render the same scene?, we can't use a second THREE.WebGLRenderer for the subwindow.
From How to render the same scene with different cameras in three.js?, a solution may be to use the setViewport function, but I don't know whether I can display the zoomed sphere in this subwindow with it.
Here is what I tried in the render() function:
function render() {
    controls.update();
    requestAnimationFrame(render);
    zoomCamera.position.copy(camera.position);
    zoomCamera.position.sub(controls.target);
    zoomCamera.position.setLength(500);
    zoomCamera.lookAt(zoomScene.position);
    // Add viewport for zoomScene
    renderer.setViewport(0, 0, width, height);
    renderer.render(scene, camera);
    zoomRenderer.setViewport(0, 0, 200, 200);
    zoomRenderer.render(zoomScene, zoomCamera);
}
From your advice, is it technically possible to display two objects at the same time (one in the main window and the other in the bottom-right subwindow)?
Can I solve my issue with setViewport?
And if someone could point me to documentation on setViewport, that would be great.
Thanks in advance.
UPDATE:
@WestLangley, thanks, I applied your modifications (with a single scene), but the content of the inset does not appear.
I think the issue comes from the fact that I don't know how to link the inset container to the drawing of the inset.
For example, following the link above, I tried:
...
// Add sphere to main scene
scene.add(sphere);
camera.position.z = 10;
var controls = new THREE.TrackballControls(camera);
// If I include these two lines below, nothing appears
// zoomContainer = document.getElementById('zoomContainer');
// zoomContainer.appendChild(renderer.domElement);
// Zoom camera
zoomCamera = new THREE.PerspectiveCamera(50, zoomWidth, zoomHeight, 1, 1000);
zoomCamera.position.z = 20;
zoomCamera.up = camera.up; // important!
// Call rendering function
render();
function render() {
    controls.update();
    requestAnimationFrame(render);
    zoomCamera.position.copy(camera.position);
    zoomCamera.position.sub(controls.target);
    zoomCamera.position.setLength(camDistance);
    zoomCamera.lookAt(scene.position);
    // Add zoomCamera to main scene
    scene.add(zoomCamera);
    // render scene
    renderer.clear();
    renderer.setViewport(0, 0, width, height);
    renderer.render(scene, camera);
    // render inset
    renderer.clearDepth(); // important!
    renderer.setViewport(10, height - zoomHeight - 10, zoomWidth, zoomHeight);
    renderer.render(scene, zoomCamera);
}
Given that I have only one renderer, I don't know how to assign the "inset" container to the viewport's renderer (i.e. with the right renderer.setViewport).
Do you have a workaround?
You want to render another view of your scene in an inset window.
Create one scene, and two cameras.
Make sure you set autoClear to false when you instantiate the renderer.
renderer.autoClear = false;
Then, in your render loop, use this pattern
// render scene
renderer.clear();
renderer.setViewport( 0, 0, window.innerWidth, window.innerHeight );
renderer.render( scene, camera );
// render inset
renderer.clearDepth(); // important!
renderer.setViewport( 10, window.innerHeight - insetHeight - 10, insetWidth, insetHeight );
renderer.render( scene, camera2 );
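For completeness, a minimal sketch of the second camera setup; insetWidth, insetHeight and the zoom distance are hypothetical values, and controls is assumed to be your TrackballControls/OrbitControls instance (the inset is drawn on the same canvas, so no extra container element is needed):
var insetWidth = 200, insetHeight = 200; // hypothetical inset size in pixels
var camera2 = new THREE.PerspectiveCamera( 50, insetWidth / insetHeight, 1, 1000 );
camera2.up = camera.up; // keep the inset orientation consistent with the main camera
// in the render loop, before drawing the inset, make camera2 follow the main camera,
// just closer to the target, so the inset shows a zoomed view of the same scene
camera2.position.copy( camera.position );
camera2.position.sub( controls.target );
camera2.position.setLength( 5 ); // hypothetical zoom distance
camera2.lookAt( scene.position );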
three.js r.75
I'm about to learn three.js. I've downloaded the library from github: three.js
I tried to run the first example from three.js in the link above. The weird problem is, it works in jsFiddle but it does not work on my computer.
I get this error in the console:
TypeError: document.body is null
[Break On This Error]
document.body.appendChild( renderer.domElement );
jsFiddle Live Working Demo
And here is my code, copied exactly from the link above:
And yes, three.js is included in the page.
var camera, scene, renderer;
var geometry, material, mesh;
init();
animate();
function init() {
    camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, 10000 );
    camera.position.z = 1000;
    scene = new THREE.Scene();
    geometry = new THREE.CubeGeometry( 200, 200, 200 );
    material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );
    mesh = new THREE.Mesh( geometry, material );
    scene.add( mesh );
    renderer = new THREE.CanvasRenderer();
    renderer.setSize( window.innerWidth, window.innerHeight );
    document.body.appendChild( renderer.domElement );
}
function animate() {
    // note: three.js includes requestAnimationFrame shim
    requestAnimationFrame( animate );
    mesh.rotation.x += 0.01;
    mesh.rotation.y += 0.02;
    renderer.render( scene, camera );
}
Your code is running before the browser has parsed the HTML document and built the DOM. That's why document.body is null.
Use a "load" event handler:
window.onload = function() {
// all that code here
};
Your jsFiddle version worked because that site does this for you; it's what the onload selection is for in the left-hand control panel.
Oh, also, you have to be careful when copying code from jsFiddle. Unless they've fixed it in the recent facelift, it's easy to pick up weird hidden characters when you copy/paste, causing mysterious JavaScript errors.
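Applied to the snippet in the question, a minimal sketch would be (the function declarations are hoisted, so only the two top-level calls need to move inside the handler):
window.onload = function () {
    init();
    animate();
};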
Just write your code inside onload; it works fine.
window.onload = function() { // your code }