This is my code:
var scene = new THREE.Scene();
// adding a camera
var camera = new THREE.PerspectiveCamera(fov,window.innerWidth/window.innerHeight, 1, 2000);
//camera.target = new THREE.Vector3(0, 0, 0);
// setting up the renderer
renderer = new THREE.WebGLRenderer();
renderer.setSize(renderW, renderH);
document.body.appendChild(renderer.domElement);
// creating the panorama
// panoramas image
var panoramasArray = ["01.jpg"];
// creation of a big sphere geometry
// default segments of sphere w 8 h 6
var sphere = new THREE.SphereGeometry(400,100,40);
sphere.applyMatrix(new THREE.Matrix4().makeScale(-1, 1, 1));
// creation of the sphere material
//var sphereMaterial = new THREE.MeshBasicMaterial();
var sphereMaterial = new THREE.MeshBasicMaterial({color:0x0000FF});
sphereMaterial.map = THREE.ImageUtils.loadTexture(panoramasArray[0]);
// geometry + material = mesh (actual object)
sphereMesh = new THREE.Mesh(sphere, sphereMaterial);
scene.add(sphereMesh);
camera.position.z = 900;
renderer.render(scene, camera);
I am running this on a localhost Apache server, but nothing is visible in the browser, just a black screen.
However, when I do this:
function render()
{
requestAnimationFrame(render);
renderer.render(scene, camera);
}
Everything works fine then! I tried adding a callback to ImageUtils.loadTexture, but still nothing is rendered on screen until I put in an animation loop. I have been scratching my head for days now. Please explain and help. I don't want to add an animation loop just to display the image texture.
Loading a texture is done via an asynchronous XMLHttpRequest. By the time the image has loaded, a few hundred milliseconds later, the main script executed the render call long ago.
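The ordering can be demonstrated with plain JavaScript, independent of three.js; `loadTexture` here is a stand-in for the asynchronous image request, not a real three.js call:

```javascript
// Statements after an asynchronous load run before the load callback fires.
const order = [];

// Stand-in for an asynchronous texture request.
function loadTexture(url, onLoad) {
  setTimeout(() => {
    order.push('onLoad (texture ready)');
    onLoad();
  }, 0);
}

loadTexture('01.jpg', () => {
  // renderer.render(scene, camera) belongs here
});
order.push('render call (texture still missing)');
// At this point only the render entry exists; onLoad runs later.
```

So a single `renderer.render(...)` at the bottom of the script draws the sphere before its texture exists, which is exactly the black screen described above.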
You have to move renderer.render(scene, camera) into the loader's onLoad callback, which is its third parameter:
THREE.ImageUtils.loadTexture( URL, mapping, function(){
renderer.render(scene,camera);
});
See the documentation.
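For reference, ImageUtils.loadTexture was deprecated in later three.js releases in favour of THREE.TextureLoader, whose load() takes the onLoad callback as its second argument. A sketch of the equivalent, assuming the scene, camera, renderer and sphereMaterial from the question:

```javascript
// Sketch only: newer API, same deferred-render idea.
var loader = new THREE.TextureLoader();
loader.load('01.jpg', function (texture) {
  sphereMaterial.map = texture;
  sphereMaterial.needsUpdate = true;
  renderer.render(scene, camera); // render once the texture is actually ready
});
```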
I'm using Three.js in a website project for a construction company. They want some 360° photos (photospheres) that I made using my phone (a Google Pixel 5). They also want to show a 3D representation of one of their projects, so using Three.js seems to be the best solution.
Here is what it looks like in Google Photos:
Google Photos screenshot
And what it looks like in Three.js:
Three.js screenshot
You can see the colors are really bad (contrast, white balance, I'm not sure exactly what) compared to the original version.
It's the first time I've used Three.js, so here is my code:
Scene:
async connectedCallback() {
// Scene
this.scene = new THREE.Scene();
// Background equirectangular texture
const background_img = this.getAttribute('background');
if (background_img) this.loadBackground(background_img);
// Camera
this.camera = new THREE.PerspectiveCamera(60, this.clientWidth / this.clientHeight, 1, 5000);
// Lights
// this.scene.add(new THREE.HemisphereLight(0xffeeb1, 0x080820, 0));
const spotLight = new THREE.SpotLight(0xffa95c, 5);
spotLight.position.set(20, 20, 20);
spotLight.castShadow = true;
spotLight.shadow.bias = -0.0001;
spotLight.shadow.mapSize.width = 1024 * 4;
spotLight.shadow.mapSize.height = 1024 * 4;
this.scene.add(spotLight);
// Renderer
this.renderer = new THREE.WebGLRenderer({ antialias: true });
this.renderer.toneMapping = THREE.ReinhardToneMapping;
this.renderer.toneMappingExposure = 2.3;
this.renderer.shadowMap.enabled = true;
this.renderer.setPixelRatio(devicePixelRatio);
this.renderer.setSize(this.clientWidth, this.clientHeight);
this.appendChild(this.renderer.domElement);
// Orbit controls
this.controls = new OrbitControls(this.camera, this.renderer.domElement);
this.controls.autoRotate = true;
// Resize event
addEventListener('resize', e => this.resize());
// Load model
const url = this.getAttribute('model');
if (url) this.loadModel(url);
// Animate
this.resize();
this.animate();
// Resize again in .1s
setTimeout(() => this.resize(), 100);
// Init observer
new IntersectionObserver(entries => this.classList.toggle('visible', entries[0].isIntersecting), { threshold: 0.1 }).observe(this);
}
Background (photosphere):
loadBackground(src) {
const equirectangular = new THREE.TextureLoader().load(src);
equirectangular.mapping = THREE.EquirectangularReflectionMapping;
// Things GitHub Copilot suggested; removing them does not change the colors, so I think they're not the problem
equirectangular.magFilter = THREE.LinearFilter;
equirectangular.minFilter = THREE.LinearMipMapLinearFilter;
equirectangular.format = THREE.RGBFormat;
equirectangular.encoding = THREE.sRGBEncoding;
equirectangular.anisotropy = 16;
this.scene.background = equirectangular;
}
As Marquizzo said in a comment:
You’re changing the renderer’s tone mapping and the exposure. It’s normal for the results to be different when you make modifications to the color output.
I kept saying the problem was not the lines GitHub Copilot suggested, because I got the same result when removing them all. But when I tried removing one line at a time, it turned out this line was the problem:
equirectangular.format = THREE.RGBFormat;
Now my result is a bit dark compared to Google Photos's, but the colors are finally accurate! I still have a lot to learn, haha.
Three.js screenshot after removing the problematic line
Thank you Marquizzo!
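As for the remaining darkness, Marquizzo's point still applies: the ReinhardToneMapping and exposure set in connectedCallback alter the output. A hedged sketch of a comparison setting, reusing the names from the snippet above:

```javascript
// Sketch: render the photosphere without tone mapping, for comparison.
this.renderer.toneMapping = THREE.NoToneMapping;
this.renderer.toneMappingExposure = 1;
this.renderer.outputEncoding = THREE.sRGBEncoding; // match the texture's encoding
```

Tone mapping is meant for lit 3D content with HDR-ish ranges; for displaying an already color-graded photo it usually just shifts the colors.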
I'm trying out Babylon.js to see if it has better capabilities and more versatility than Three.js. I wanted to test out the quality of 3D models in both by importing a simple model, but I cannot seem to get the MTL data to display the colors on the penguin that I made in Blender. How do I get it to show, or what do I need to modify about the model or file format to get it to work?
Also, I have already tried the STL format, which does not display colors either.
var canvas = document.getElementById("renderCanvas");
var engine = new BABYLON.Engine(canvas, true);
var createScene = function () {
var scene = new BABYLON.Scene(engine);
var camera = new BABYLON.ArcRotateCamera("Camera", Math.PI / 2, Math.PI / 2, 2, new BABYLON.Vector3(0,0,5), scene);
camera.attachControl(canvas, true);
var light1 = new BABYLON.HemisphericLight("light1", new BABYLON.Vector3(1, 1, 0), scene);
var light2 = new BABYLON.PointLight("light2", new BABYLON.Vector3(0, 1, -1), scene);
BABYLON.SceneLoader.Append("./", "penguinmodel.obj", scene, function (scene) {});
return scene;
};
Full code and files at:
https://repl.it/#SwedishFish/Babylon-Testing
Hello, you should export your model using the glTF exporter. It is a far richer format and very well supported by Babylon.js.
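A sketch of what that could look like; penguinmodel.glb is a hypothetical filename for the re-exported Blender model, and the glTF loader (bundled in babylonjs.loaders / @babylonjs/loaders) must be loaded alongside the engine:

```javascript
// Sketch: load a glTF/GLB export instead of OBJ + MTL.
// Blender: File > Export > glTF 2.0 (.glb embeds materials and textures).
BABYLON.SceneLoader.Append("./", "penguinmodel.glb", scene, function (scene) {
  // materials come through without a separate .mtl file
});
```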
I use the THREE.js LoadingManager to check whether an object or texture has loaded.
var Mesh;
// the manager must exist before it is passed to the loaders
var manager = new THREE.LoadingManager();
var TLoader = new THREE.TextureLoader(manager);
manager.onProgress = function ( item, loaded, total ) {
console.log( item, loaded, total );
};
manager.onLoad = function()
{
console.log(Renderer.domElement.toDataURL());
}
function renderModel(path,texture) {
var Material = new THREE.MeshPhongMaterial({shading: THREE.SmoothShading});
Material.side = THREE.DoubleSide;
var Loader = new THREE.JSONLoader(manager);
Loader.load(path,function(geometry){
geometry.mergeVertices();
geometry.computeFaceNormals();
geometry.computeVertexNormals();
TLoader.load(texture,function(texture){
Mesh = new THREE.Mesh(geometry, Material);
Mesh.material.map = texture;
Scene.add(Mesh);
});
});
}
and I just call the renderModel function in a loop.
But the console.log(Renderer.domElement.toDataURL()) output from manager.onLoad only contains some of the 3D models, not all the ones in the scene.
I just want to get Renderer.domElement.toDataURL() once all the 3D models have been rendered in the scene.
Right now only 2 or 3 models show up in the image, even though all the items have loaded into the scene.
The renderer renders an image each frame. When the loading of all the objects is complete, the onLoad method of the manager is called immediately. So the last objects were just added to the scene, and you retrieve the image data without giving the renderer a chance to render a new frame. You need to wait for a new frame; a timeout of e.g. 200 milliseconds may help.
EDIT
You could also call the render method in your onLoad, so the renderer draws a new image before you call console.log:
manager.onLoad = function()
{
Renderer.render( Scene, camera );
console.log(Renderer.domElement.toDataURL());
}
I'm currently trying to update individual textures based on cursor position on an imported JSON model. Below is the import code for the JSON model:
var loader = new THREE.JSONLoader();
loader.load('slicedNew.json', function(geometry, materials) {
console.log(materials);
mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial( materials ));
mesh.scale.x = mesh.scale.y = mesh.scale.z = 8;
mesh.translation = THREE.GeometryUtils.center(geometry);
scene.add(mesh);
console.log('IMPORTED OBJECT: ', mesh);
});
Below is the raycaster code for when the cursor is over a particular material:
switch(intersects[0].face.materialIndex)
{
case 0:
console.log('0 material index');
intersects[0].object.material.needsUpdate = true;
intersects[0].object.material.materials[0] = new THREE.MeshLambertMaterial({
map: crate
});
break;
}
Any time I hover over a certain side of the shape, the texture is loaded but it is always black. Even when I initialise the model to use the texture, it still appears black, yet I can load in a simple cube and map the image as a texture to that shape with no issues.
Any help would be appreciated.
I'm using three.js to create a minecraft texture editor, similar to this. I'm just trying to get the basic click-and-paint functionality down, but I can't seem to figure it out. I currently have textures for each face of each cube and apply them by making shader materials with the following functions.
this.createBodyShaderTexture = function(part, update)
{
var sides = ['left', 'right', 'top', 'bottom', 'front', 'back'];
var images = [];
for (var i = 0; i < sides.length; i++)
{
images[i] = 'img/'+part+'/'+sides[i]+'.png';
}
var texCube = THREE.ImageUtils.loadTextureCube(images); // loadTextureCube is a function, not a constructor
texCube.magFilter = THREE.NearestFilter;
texCube.minFilter = THREE.LinearMipMapLinearFilter;
if (update)
{
texCube.needsUpdate = true;
console.log(texCube);
}
return texCube;
}
this.createBodyShaderMaterial = function(part, update)
{
var shader = THREE.ShaderLib['cube'];
shader.uniforms['tCube'].value = this.createBodyShaderTexture(part, update);
shader.fragmentShader = document.getElementById("fshader").innerHTML;
shader.vertexShader = document.getElementById("vshader").innerHTML;
var material = new THREE.ShaderMaterial({fragmentShader: shader.fragmentShader, vertexShader: shader.vertexShader, uniforms: shader.uniforms});
return material;
}
SkinApp.prototype.onClick =
function(event)
{
event.preventDefault();
this.change(); //makes texture file a simple red square for testing
this.avatar.remove(this.HEAD);
this.HEAD = new THREE.Mesh(new THREE.CubeGeometry(8, 8, 8), this.createBodyShaderMaterial('head', false));
this.HEAD.position.y = 10;
this.avatar.add(this.HEAD);
this.HEAD.material.needsUpdate = true;
this.HEAD.dynamic = true;
}
Then, when the user clicks anywhere on the mesh, the texture file itself is updated using canvas. The update occurs, but the change doesn't show up in the browser unless the page is refreshed. I've found plenty of examples of how to change the texture image to a new file, but not of how to show changes in the same texture file at runtime, or even whether that's possible. Is this possible, and if not, what alternatives are there?
When you update a texture, whether it's based on canvas, video or loaded externally, you need to set the needsUpdate property on the texture to true:
If an object is created like this:
var canvas = document.createElement("canvas");
var canvasMap = new THREE.Texture(canvas)
var mat = new THREE.MeshPhongMaterial();
mat.map = canvasMap;
var mesh = new THREE.Mesh(geom,mat);
After the texture has been changed, set the following to true:
mesh.material.map.needsUpdate = true;
And the next time you render the scene, it'll show the new texture.
Here are all the basics of what you need to know about "updating" things in three.js: https://threejs.org/docs/#manual/introduction/How-to-update-things
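Putting it together for the click-and-paint case, a minimal sketch; geom, scene, camera and renderer are assumed to come from your existing setup:

```javascript
// Sketch: paint into the texture's source canvas, then flag it for re-upload.
var canvas = document.createElement("canvas");
canvas.width = canvas.height = 64;
var ctx = canvas.getContext("2d");

var canvasMap = new THREE.Texture(canvas);
var mat = new THREE.MeshPhongMaterial({ map: canvasMap });
var mesh = new THREE.Mesh(geom, mat);

renderer.domElement.addEventListener("click", function () {
  ctx.fillStyle = "red";
  ctx.fillRect(0, 0, canvas.width, canvas.height); // edit the source canvas
  mesh.material.map.needsUpdate = true;            // re-upload it to the GPU
  renderer.render(scene, camera);                  // redraw with the new texture
});
```

The key point is that three.js caches the uploaded image; drawing to the canvas alone changes nothing on screen until needsUpdate forces a re-upload and a new frame is rendered.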