I am trying to display a textured plane with Three.js. I'm working with Forge RCDB.
At first I managed to display the plane, but instead of being textured it was completely black. I made some changes and now nothing is displayed at all.
Here is my code:
render () {
  var viewer = NOP_VIEWER;
  var scene = viewer.impl.scene;
  var camera = viewer.autocamCamera;
  var renderer = viewer.impl.renderer();
  renderer.render(scene, camera);
}
and in the function that is supposed to display the textured plane:
new THREE.TextureLoader(texture).load(texture, this.render);
tex.wrapS = THREE.RepeatWrapping; // ClampToEdgeWrapping // MirroredRepeatWrapping
tex.wrapT = THREE.RepeatWrapping; // ClampToEdgeWrapping // MirroredRepeatWrapping
tex.mapping = THREE.UVMapping;
At the beginning I used loadTexture(). I managed to display my plane, but it was all black and no texture was applied to it.
Then I switched to THREE.TextureLoader().load(); in this case I believe it is trying to find the image on localhost. The image is downloaded, I can see it in the console.
But now I get these errors:
Uncaught TypeError: scope.manager.itemStart is not a function
and:
Uncaught TypeError: renderer.render is not a function
Now the object is not displayed at all, not even in black.
So I think this may be linked to render, but I don't understand how.
I found this, and it partially answers my question.
In the end I decided to keep THREE.ImageUtils.loadTexture(), and I replaced MeshLambertMaterial with MeshBasicMaterial.
There is no need to call render.
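For reference, here is a minimal sketch of that combination (the texture path and plane size are placeholders, and it assumes the r71-era Three.js bundled with the viewer, where THREE.ImageUtils.loadTexture still exists; sceneUpdated is the same refresh call used elsewhere in this thread):

var tex = THREE.ImageUtils.loadTexture('texture.png'); // placeholder path
tex.wrapS = THREE.RepeatWrapping;
tex.wrapT = THREE.RepeatWrapping;

var geometry = new THREE.PlaneGeometry(10, 10);
var material = new THREE.MeshBasicMaterial({ map: tex }); // unlit material, so the plane no longer renders black
var plane = new THREE.Mesh(geometry, material);

var viewer = NOP_VIEWER;
viewer.impl.scene.add(plane);
viewer.impl.sceneUpdated(true); // let the viewer refresh itself instead of calling render() manually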
I have a .glb model which I am loading into my Vue project via Three.js. I have managed to import several other models for practice, but the one I actually want on my page will not load. I have tried playing around with different scales, positions, background colors (since the object is mostly black), and camera angles, but I cannot get it in frame no matter what I do. I can see the model perfectly in any regular glTF viewer, yet I cannot see it in my project. What am I doing wrong here?
Edit: As a side note, I also tried changing the scale in Blender, but that did not change the result.
const loader = new GLTFLoader();
let me = this; // must refer to the instance in vue in order to be added to the scene, have tested this with other models

loader.load(
  'pantalla_ball_2.glb',
  function (gltf) {
    gltf.scene.traverse(function (node) {
      if (node.isMesh) { node.castShadow = true; }
    });
    gltf.scene.scale.set(1, 1, 1); // Have tried several scales from 0.01 to 200
    me.scene.add(gltf.scene);
    console.log("added"); // added is successfully called every time
  },
  function (xhr) {
    console.log(xhr);
  },
  function (err) {
    console.log(err); // no errors appear in console
  }
);
The object loads perfectly in a regular 3D viewer (screenshot below).
Thanks to JP4 for his helpful comments: the issue was that the ambient lighting was not strong enough to make the object visible. The axes helper showed me that the object was indeed there, thanks to the gaps in the axes.
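A minimal sketch of that kind of fix (the light intensity and helper size are illustrative, not taken from the actual project, and it assumes THREE is imported alongside GLTFLoader):

const ambient = new THREE.AmbientLight(0xffffff, 2); // brighter ambient light so the mostly-black model shows up
me.scene.add(ambient);

const axes = new THREE.AxesHelper(100); // helper that confirmed the model was actually in the scene
me.scene.add(axes);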
I am trying to implement drag controls on this text geometry that I am creating in the viewer. I create the text like so:
createText(params) {
  const textGeometry = new TextGeometry(params.text,
    Object.assign({}, {
      font: new Font(FontJson),
      params
    }));

  const geometry = new THREE.BufferGeometry();
  geometry.fromGeometry(textGeometry);

  const material = this.createColorMaterial(params.color);

  const text = new THREE.Mesh(geometry, material);
  text.scale.set(params.scale, params.scale, params.scale);
  text.position.set(
    params.position.x,
    params.position.y,
    10);

  this.intersectMeshes.push(text);
  this.viewer.impl.scene.add(text);
  this.viewer.impl.sceneUpdated(true);

  return text;
}
This works great: the meshes get added to the viewer and I can see them. Fantastic! Now I want to be able to drag them around with my mouse after I have added them. I noticed that Three.js already has drag controls built in, so I implemented them like so:
enableDragging() {
  let controls = new THREE.DragControls(this.viewer, this.viewer.impl.camera.perspectiveCamera, this.viewer.impl.canvas);

  let startColor;

  controls.addEventListener('dragstart', dragStartCallback);
  controls.addEventListener('dragend', dragendCallback);

  function dragStartCallback(event) {
    startColor = event.object.material.color.getHex();
    event.object.material.color.setHex(0x000000);
  }

  function dragendCallback(event) {
    event.object.material.color.setHex(startColor);
  }
}
After a bit of debugging, I have seen where the problem occurs. For some reason, when I click on one of the meshes, the raycaster doesn't find any intersections, i.e. the array I get back is empty, no matter where I click on these objects.
Is my implementation wrong, or did I set these meshes up incorrectly for dragging? I have gotten the drag controls to work outside of the viewer, just not within it.
This will not work: looking at the code of DragControls, the viewer's implementation is too different in the way it handles the camera. You would need to either implement a custom version of DragControls, or take a look at my transform tool and adapt it for custom meshes:
Moving visually your components in the viewer using the TransformTool
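As a rough sketch of the custom direction (this is not the TransformTool itself; it assumes it runs inside a method like enableDragging() above, so this.viewer, impl.camera.perspectiveCamera, impl.canvas and this.intersectMeshes are the same objects used in the question):

const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();
const viewer = this.viewer;
const meshes = this.intersectMeshes;

viewer.impl.canvas.addEventListener('pointerdown', function (event) {
  // convert the click position to normalized device coordinates
  const rect = viewer.impl.canvas.getBoundingClientRect();
  pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

  // raycast against the custom meshes using the viewer's perspective camera
  raycaster.setFromCamera(pointer, viewer.impl.camera.perspectiveCamera);
  const hits = raycaster.intersectObjects(meshes);
  if (hits.length > 0) {
    // hits[0].object is the text mesh under the cursor; start a custom drag from here
  }
});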
I'm using Babylon.js 2.4.0.
I have a mesh (in the shape of a couch) loaded from a .obj file, and a camera set up like this:
let camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 2, 0), scene);
camera.checkCollisions = true;
camera.applyGravity = true;
camera.ellipsoid = new BABYLON.Vector3(1, 1, 1);
camera.attachControl(canvas, false);
camera.speed = 0.5;
camera.actionManager = new BABYLON.ActionManager(scene);
I want to set up an event so that when I walk through the couch, "intersection" is logged to the console:
let action = new BABYLON.ExecuteCodeAction(
{ trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: { mesh: couchMesh }},
(evt) => {
console.log("intersection");
}
);
this.camera.actionManager.registerAction(action);
When I walk through the mesh, nothing is logged to the console.
I've created a demo on the Babylon.js Playground, based on an example that they provide, to check that it wasn't a problem with my mesh or camera setup, and it doesn't appear to be (the playground doesn't work either).
A camera in Babylon.js has no action manager, so even if you set one it won't really work.
To get this working with action managers, you could define an invisible box with a predefined size around the camera and attach the action manager to that mesh. Then set the mesh's parent to be the camera, and you are done. Here is your playground with those changes: http://www.babylonjs-playground.com/#KNXZF#3
Another solution is to use the internal collision system of Babylon.js and set the camera's onCollide function to actually do something :) Here is an example: http://www.babylonjs-playground.com/#KNXZF#4
Notice that in the second playground the camera won't go through the box, as the collision system prevents it from doing so. I am not sure about your use case, so it is hard to say which of the two will work better.
If you need a "gate" system (knowing when a player moved through a gate, for example), use the first method. The second is much cleaner, but has its downsides.
The following has worked well for me with SVG files in r76 and r77 of Three.js, but in r78 I can only get it to work with PNGs and JPGs.
var floor = new THREE.TextureLoader();
floor.load('layout.svg', function (texture) {
  var geometry = new THREE.PlaneBufferGeometry(4096, 4096);
  var material = new THREE.MeshBasicMaterial({ map: texture });
  var mesh = new THREE.Mesh(geometry, material);
  mesh.rotation.x = -Math.PI / 2;
  scene.add(mesh);
});
Since the problem arose, I added the progress and error functions to the load arguments...
function (xhr) {
  console.log((xhr.loaded / xhr.total * 100) + '% loaded');
},
function (xhr) {
  console.log('An error happened');
}
...which tells me that in r78 the SVG is 100% loaded, but it doesn't show up in the scene.
I should add that I have also used this with MeshPhongMaterial for its reflection properties, and with transparency to show SVG elements apparently floating in a skymap; great fun.
My question is, how can I get this working?
In r77, the SVG would map onto a PlaneBufferGeometry in Firefox or Chrome.
In r78 it won't load, with no errors shown in Firebug.
UPDATE
I investigated SVGLoader.js and SVGRenderer.js (which also requires Projector.js). This renders an SVG file facing the camera and responds to translations, but not rotations; i.e. no perspective view is possible.
I added an XML header to the SVG file, then a doctype, but Three.js obviously isn't bothered by their absence. Using localhost makes no difference in Firefox; it is required in Chrome.
So the problem seems to lie either with TextureLoader or WebGLRenderer, which have both had quite a reshuffle in r78.
I also briefly tried the dev version, which behaves the same as r78.
Can anyone suggest where I go from here? I'm not sure whether to submit a bug report or a "feature restore" request.
Here's the desired effect.
Now, my understanding is that prior to r78, SVGs were internally converted to raster images before rendering.
To continue using SVG images with TextureLoader in r78 onwards, convert them to PNGs beforehand (a runtime alternative is sketched below).
OR, if the SVG always faces the camera, use THREE.SVGLoader.
OR, if the separate SVG elements are required in a Three.js scene, convert them to paths and use this example to make a 2D shape or a 3D extruded shape.
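As a hedged variation on the first option: rather than converting the file offline, you can let the browser rasterize the SVG onto a canvas at runtime and feed that to THREE.CanvasTexture. This assumes the SVG has explicit width/height attributes and is served from the same origin so the canvas is not tainted; the 4096 size simply matches the plane above.

var img = new Image();
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = 4096;
  canvas.height = 4096;
  canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height); // browser rasterizes the SVG here

  var texture = new THREE.CanvasTexture(canvas);
  var geometry = new THREE.PlaneBufferGeometry(4096, 4096);
  var material = new THREE.MeshBasicMaterial({ map: texture });
  var mesh = new THREE.Mesh(geometry, material);
  mesh.rotation.x = -Math.PI / 2;
  scene.add(mesh);
};
img.src = 'layout.svg';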
My lesson here was not to rely on examples to just work. I should dig right in, break things, and stay up to date with issues on GitHub.
And use more than one browser; at the time, Chrome was going through a buggy phase not entirely unrelated to this.
I am using Three.js, version 71. I'm using Blender, version 2.73.
I created a textured Collada object (.dae file) using Blender, and now I want to load it into my Three.js scene. So far, I can only load models exported from Blender that have no textures on them.
Here is how I create the textured collada object:
In Blender, I simply use the default cube. Using the settings on the right, I add a texture to the cube. Here is the texture I am putting onto the cube (NOTE: it is 2048 x 2048, so it's a power of 2):
Here is an image of the cube in render mode to prove that the texture is on it:
Here are the export settings I used when I exported the cube as a collada from Blender:
Here is some code I used to try to load the textured collada:
var loader = new THREE.ColladaLoader();
var localObject;
loader.options.convertUpAxis = true;
loader.load('./models/test_texture.dae', function (collada) {
  localObject = collada.scene;
  localObject.scale.x = localObject.scale.y = localObject.scale.z = 32;
  localObject.updateMatrix();
  game.scene.add(localObject);
});
Here is the error I got:
[.WebGLRenderingContext]GL ERROR :GL_INVALID_OPERATION : glDrawElements: attempt to access out of range vertices in attribute 2
I then googled that error message, and someone said that I need to compute tangents. Here are my attempts at that and the errors I got:
var loader = new THREE.ColladaLoader();
var localObject;
loader.options.convertUpAxis = true;
loader.load('./models/test_texture.dae', function (collada) {
  localObject = collada.scene;
  localObject.scale.x = localObject.scale.y = localObject.scale.z = 32;
  localObject.updateMatrix();

  for (var i = collada.scene.children.length - 1; i >= 0; i--) {
    var child = collada.scene.children[i];
    // child.children[0] will give us the THREE.Mesh of the collada
    if (child.colladaId == "Cube") {
      // ATTEMPT 1: Just tried computing tangents based on answer from neoRiley here: http://stackoverflow.com/questions/21200386/webgl-gl-error-gl-invalid-operation-gldrawelements-attempt-to-access-out-of
      // child.children[0].geometry.computeTangents();

      // ATTEMPT 2: Got this suggestion from Popov here: http://stackoverflow.com/questions/15717468/three-lod-and-normalmap-shader-fail
      // child.children[0].geometry[ 0 ][ 0 ].computeTangents();
      // child.children[0].geometry[ 1 ][ 0 ].computeTangents();

      // ATTEMPT 3: Tried setting some update flags based on answer from Sayris here: http://stackoverflow.com/questions/13988615/webglrenderingcontext-error-loading-texture-maps
      // child.children[0].geometry.buffersNeedUpdate = true;
      // child.children[0].geometry.uvsNeedUpdate = true;
      // child.children[0].material.needsUpdate = true;
      // child.children[0].geometry.computeTangents();
    }
  }

  game.scene.add(localObject);
});
ATTEMPT 1 ERROR:
Uncaught TypeError: Cannot read property '0' of undefined
// Stack trace
three.js:9935 handleTriangle
three.js:9974 THREE.Geometry.computeTangents
myCode.js:116 (anonymous function)
ColladaLoader.js:204 parse
ColladaLoader.js:84 request.onreadystatechange
ATTEMPT 2 ERROR:
Uncaught TypeError: Cannot read property '0' of undefined
This came from my own code. I didn't think the geometry of a THREE.Mesh was two-dimensional, but I tried it anyway.
ATTEMPT 3 ERROR: (same as ATTEMPT 1 ERROR)
Uncaught TypeError: Cannot read property '0' of undefined
// Stack trace
three.js:9935 handleTriangle
three.js:9974 THREE.Geometry.computeTangents
myCode.js:116 (anonymous function)
ColladaLoader.js:204 parse
ColladaLoader.js:84 request.onreadystatechange
I decided to use the JSON loader instead because I couldn't get the Collada one to work. The first thing I did was install the JSON exporter addon into Blender. I got the addon from the .zip file of my Three.js download; it's in three.js-r71/utils/exporters/blender/addons and it's called io_three. You just need to copy that folder and paste it into your Blender installation directory under Blender Foundation/Blender/2.73/scripts/addons.
You then have to enable it in Blender. To do that:
Click File->User Preferences...
Click Add-ons.
Type three in the search field.
All the way to the right, click the check box to enable it.
At the bottom left, click Save User Settings so you don't need to do this again. You'll know it's working if you see Three.js (.json) when you click File->Export.
I followed most of the instructions from this site to help me create and export a model: http://graphic-sim.com/B_basic_export.html
Here are the steps I used to create and export the model (I tweaked them a little from the site's version):
Start up Blender.
Look at the Properties editor (on the right).
Press the World context button. In the World panel click Ambient Color and change it from black to middle gray.
Press the Material context button. On the Diffuse panel change Intensity to 1.0. Do the same on the Specular panel. In the Shading panel put a check in the Shadeless box.
Press the Textures context button. Near the top in the Type drop down box, select Image or Movie. In the Image panel, browse to your image (make sure the image's dimensions are in a power of 2).
Choose the UV Editing screen layout (drop-down box to right of help menu at top).
With mouse cursor in 3D editor, go into edit mode (Tab key).
Unwrap (Press the U key). Choose Smart UV Project. Click Ok to accept defaults.
In the UV Editing screen, select your image using the menu at the bottom left (see screenshot)
Select Image->Save As Image. This image will need to be next to your JSON file that you will export.
Click File->Export->Three.js (.json).
To the left, select a few more export options (see screenshot for the ones I used, which I found by trial and error). I think I only added Face Materials, Materials, and Textures. You can then also click Save Settings to save these settings.
Put your JSON file and your image file that you saved earlier in your project folder.
Use the following code to load it:
var object;
var loader = new THREE.JSONLoader();

loader.load("./models/test_texture.json", function (geometry, materials) {
  object = new THREE.Mesh(geometry, materials[0]);
  object.scale.set(32, 32, 32);
  game.scene.add(object);
});