I'm new to Three.js. Is there a way to get separate objects (elements/shells) out of a Mesh or Geometry object?
If there's no native way to do that, how could I implement a method that finds faces not connected to the main body and detaches them so they form their own Mesh objects?
Background: I'm loading a 3D model and would like to unify it using ThreeBSP; I need to separate the objects before applying the boolean operations.
Thank you
Dig into the Geometry object. It has an array of faces. I don't think there is a native way to check which ones are contiguous.
Shooting from the hip, "contiguous" in this case means faces that share points with something that shares points with something that shares points, etc. So pick a face. Store its defining points, find any faces that also use those points, store their points, find all the faces that share any of the expanded points, and so on. Check out the "flood fill" algorithm for some direction on how to use recursion, as well as how to do the bookkeeping needed to keep duplicates from trapping you in an endless search loop.
Good Luck
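The flood-fill idea above can be sketched in plain JavaScript. This is a minimal sketch, assuming each face is a triple of vertex indices (as in Geometry.faces with .a/.b/.c); copying each group into a new Mesh is left out:

```javascript
// Group faces into connected "shells": two faces are connected
// if they share at least one vertex index.
// `faces` is an array of [a, b, c] vertex-index triples.
function findShells(faces) {
  // Map each vertex index to the faces that use it.
  const facesByVertex = new Map();
  faces.forEach((face, i) => {
    for (const v of face) {
      if (!facesByVertex.has(v)) facesByVertex.set(v, []);
      facesByVertex.get(v).push(i);
    }
  });

  const visited = new Array(faces.length).fill(false);
  const shells = [];

  for (let start = 0; start < faces.length; start++) {
    if (visited[start]) continue;
    // Flood fill from this face with an explicit stack
    // (avoids deep recursion on large meshes); the visited
    // array is the bookkeeping that prevents endless loops.
    const shell = [];
    const stack = [start];
    visited[start] = true;
    while (stack.length > 0) {
      const i = stack.pop();
      shell.push(i);
      for (const v of faces[i]) {
        for (const j of facesByVertex.get(v)) {
          if (!visited[j]) {
            visited[j] = true;
            stack.push(j);
          }
        }
      }
    }
    shells.push(shell);
  }
  return shells;
}
```

Each returned shell is a list of face indices; you could then copy those faces (and the vertices they reference) into one geometry per shell. Note this only detects shells whose faces actually share vertex indices: coincident but duplicated vertices would need a positional merge first.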
I have a procedurally generated game that uses tilemaps for easier generation and pathfinding, and I need multiple layers for things like separate depths for different groups of objects. But the room data is specified at the creation of the map, not at the creation of the layer, unless something like Tiled is used, which I can't do because my game is procedurally generated. I could go through the array and place individual tiles with a loop, but is there another solution that I'm missing? Thanks!
You could do something I mentioned in this answer: basically, create two maps with the same dimensions, set the z-order with setDepth, and if the map above has transparent tiles, or tiles with the ID -1, the map below will be visible.
I've personally never used it for larger maps, but I assume it shouldn't cause performance issues, and it is an easy solution.
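As a plain-JavaScript sketch of that visibility rule (assuming, as above, that -1 marks an empty tile; in Phaser itself the stacking order would come from setDepth on the two layers):

```javascript
// Two same-sized tile layers stacked on top of each other; -1 marks
// an empty tile in the top layer. Returns the tile ID visible at
// (row, col): the top layer's tile unless it is empty, in which case
// the bottom layer shows through.
function visibleTile(bottomLayer, topLayer, row, col) {
  const top = topLayer[row][col];
  return top === -1 ? bottomLayer[row][col] : top;
}
```

Because both maps share the same dimensions, every (row, col) lookup is valid on both layers, so the rule stays this simple.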
I need to draw thousands of points and lines which have position, size and color attributes, and their positions are dynamic (interactive while dragging).
I have been using BufferGeometry until now, but I have just found two more things:
instancing
interleaved buffer
I want to know what these are and how they work. What are their advantages and disadvantages? Are they better for my case, or is plain BufferGeometry best for me?
Can you give me a full comparison between these three?
Interleaving means that instead of creating multiple VBOs to contain your data, you create one and mix your data. Instead of having one buffer with v1,v1,v1,v2,v2,v2... and another with c1,c1,c1,c2,c2,c2..., you have one with v1,v1,v1,c1,c1,c1,v2,v2,v2,c2,c2,c2... with different pointers.
I'm not sure what the upside of this is and am hoping that someone with more experience can answer better. I'm also not sure what happens if you want to mix types, say less precision for texture coordinates, or whether that would even be good practice.
On the downside, if you have to loop over this and update positions, for example, but not the colors, that loop may be slightly more complicated than if the data were just lined up.
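To make the layout concrete, here is a sketch in plain JavaScript that packs two separate attribute arrays into one interleaved array. In three.js, the result could back an InterleavedBuffer with two InterleavedBufferAttribute views (itemSize 3, offsets 0 and 3), though that wiring is omitted here:

```javascript
// Build one interleaved array (x,y,z,r,g,b per vertex) from two
// separate attribute arrays, mirroring what a single interleaved
// VBO would contain instead of two tightly packed ones.
function interleave(positions, colors) {
  const vertexCount = positions.length / 3;
  const stride = 6; // 3 position floats + 3 color floats per vertex
  const out = new Float32Array(vertexCount * stride);
  for (let i = 0; i < vertexCount; i++) {
    // position components at offset 0 within the vertex's slot
    out.set(positions.subarray(i * 3, i * 3 + 3), i * stride);
    // color components at offset 3
    out.set(colors.subarray(i * 3, i * 3 + 3), i * stride + 3);
  }
  return out;
}
```

The "different pointers" mentioned above are exactly the stride (6) and per-attribute offsets (0 and 3) that tell the GPU where each attribute lives inside the shared buffer.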
Instancing is when you use one attribute across many geometry instances.
One type would be, say, a cube: v1,v1,v1,v2,v2,v2...v24,v24,v24, 24 vertices describing a cube with sharp edges in one attribute. You can have another with 24 normals, and another with indices. If you wanted to position this somewhere, you would use a uniform and do some operation with it on the position attribute.
If you want to make 16683 cubes, each with an individual position, you can issue a draw call with the same cube bound (attributes) but with the position uniform changed each time.
You can instead make another, instanced attribute, pos1,pos1,pos1...pos16683,pos16683,pos16683, with 16683 positions for that many instances of the cube. When you issue an instanced draw call with these attributes bound, you can draw all 16683 instances of the cube within that one call. Instead of using a position uniform, you would have another attribute.
In the case of your points this does not make sense, since they map 1:1 to the attribute: you assign the position of each point inside that attribute, and there is no further need to transform it with some kind of uniform. With instancing, you could turn each point into something more complex, say a cube.
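The saving from instancing is easiest to see in the attribute sizes. A rough sketch, assuming the 24-vertex sharp-edged cube above with 3 floats per position:

```javascript
// Floats uploaded for N cubes (24 vertices, 3 floats per position)
// when the geometry is fully expanded versus instanced.
function floatsExpanded(instanceCount) {
  // Every cube carries its own copy of all 24 vertex positions.
  return instanceCount * 24 * 3;
}

function floatsInstanced(instanceCount) {
  // One shared 24-vertex cube, plus one 3-float offset per instance.
  return 24 * 3 + instanceCount * 3;
}
```

For 16683 cubes that is 1,201,176 floats expanded versus 50,121 instanced, and a single instanced draw call instead of 16683 uniform-swapping ones.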
I'm looking to create a 3D character using Three.js, with the character having different faces for different textures and also holding different items (e.g. a sword or a dagger).
Is it possible to use one 3D object and show/hide different parts of that object using three.js?
For example, the object could have a dagger and a sword, but the user only sees the dagger in one scenario, or the sword in another scenario?
Or would I need to use different 3D objects and load them all into the same canvas?
Thanks in advance!
Create a custom shader and add a custom attribute to your geometry: every vertex will have an attribute with which you can hide (discard) every fragment it creates.
There is a complete example here:
http://threejs.org/examples/#webgl_buffergeometry_selective_draw
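A minimal sketch of the shader side, assuming a hypothetical per-vertex float attribute named `visible` (1.0 = show, 0.0 = hide) that the vertex shader passes through to the fragment stage (in a three.js ShaderMaterial, `position`, `modelViewMatrix` and `projectionMatrix` are provided built-ins):

```glsl
// --- vertex shader ---
attribute float visible;   // custom per-vertex attribute: 1.0 = show, 0.0 = hide
varying float vVisible;
void main() {
  vVisible = visible;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

// --- fragment shader ---
varying float vVisible;
void main() {
  if (vVisible < 0.5) discard; // drop fragments belonging to hidden parts
  gl_FragColor = vec4(1.0);
}
```

To switch between the dagger and the sword, you would set this attribute to 1.0 on one item's vertices and 0.0 on the other's, then flag the attribute for re-upload.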
I'm finally understanding the performance impact of merged geometries. I've noticed the GPU has draw calls for all shared geometries combined with materials (or perhaps it's just the material count). My question is: why are developers forced to figure out how to merge geometries when it's clear that three.js will just divide everything out of your merged geometry anyway?
I'm probably missing something.
It is difficult to make general statements about WebGLRenderer performance, because each use case is different.
But typically, performance will improve if the number of draw calls is reduced. If you have multiple meshes, and they all share the same material, then by merging the mesh geometries into a single geometry, you reduce the number of draw calls to one.
Remember, this approach makes sense only if your multiple meshes are static relative to each other, because once merged, they cannot be transformed independently.
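The merge itself is essentially array concatenation with an index offset. A plain-JavaScript sketch, assuming indexed geometries represented as flat position arrays plus an index list (three.js has its own helpers for this, so this is just to show the mechanics):

```javascript
// Merge two indexed geometries into one: concatenate the position
// arrays, then append the second index list with its indices shifted
// past the first geometry's vertex count.
function mergeGeometries(a, b) {
  const positions = new Float32Array(a.positions.length + b.positions.length);
  positions.set(a.positions, 0);
  positions.set(b.positions, a.positions.length);

  const offset = a.positions.length / 3; // number of vertices in geometry a
  const index = new Uint32Array(a.index.length + b.index.length);
  index.set(a.index, 0);
  for (let i = 0; i < b.index.length; i++) {
    index[a.index.length + i] = b.index[i] + offset;
  }
  return { positions, index };
}
```

Drawn with one material, the merged result renders in a single draw call, which is exactly the saving described above.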
And yes, it is true that if your mesh has multiple materials, the renderer will break the geometry up into smaller geometries, one for each material, prior to rendering.
Here is a useful tip: In the console, type renderer.info and inspect the object returned. This will give you insight into what is going on behind the scenes.
You can also display renderer.info live on your screen by using this utility.
three.js r.68
I'm making a game with three.js in which a mesh will be rendered in greater detail as the player gets closer (i.e. different geometry and material). This seems like a common requirement in open-world games, and I was wondering what the standard procedure is in three.js.
It looks like changing the geometry of a mesh is quite computationally costly, so should I just store a different mesh for each distance, or is there an existing data structure for this purpose?
You want to use level-of-detail (LOD) modeling.
There is a three.js example of that here: http://threejs.org/examples/webgl_lod.html
In the example, press the 'W/S' keys to see the effect.
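THREE.LOD essentially keeps a list of (mesh, distance) pairs added via addLevel, and each frame shows the entry whose distance threshold best matches the current camera distance. The selection rule can be sketched in plain JavaScript (names here are illustrative, not the library's internals):

```javascript
// Pick a level of detail for a given camera distance.
// `levels` is sorted by ascending distance threshold; each threshold
// is the distance at which that level becomes the active one.
function selectLevel(levels, distance) {
  let current = 0;
  for (let i = 1; i < levels.length; i++) {
    if (distance >= levels[i].distance) {
      current = i; // far enough away to drop to this coarser level
    } else {
      break;
    }
  }
  return levels[current].name;
}
```

So rather than mutating one mesh's geometry (which, as you note, is costly), you store one mesh per detail level up front and let the LOD object swap which one is visible.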
three.js r.62