Following my previous question: I'm building models with BufferGeometry and have realized that the transparent flag affects render order: objects with transparent materials are rendered after the non-transparent ones.
I also read this thread and ran an experiment on JSFiddle, which showed that the render order of faces within a BufferGeometry follows the order in which they are specified in the buffers, not their distance from the camera. (In the experiment, I specify a closer triangle first in the buffer, and it occludes the triangles behind it.)
So my question is: is it possible to set the render order of faces in a BufferGeometry manually?
In my case, I may need to change the transparency of building elements dynamically.
(I've read the thread saying we can set the renderOrder of an Object3D.)
Thank you.
Faces are rendered in the order in which they appear in the BufferGeometry.
If you have to vary the transparency of scene elements dynamically, I suggest you maintain separate geometries, each paired with its own material.
The renderer will render the objects having transparent = false first. Then it will render the objects having transparent = true.
You will likely find you have fewer artifacts if you use the following settings for your transparent materials:
material.transparent = true;
material.opacity = 0.5; // or as desired
material.depthTest = true; // the default
material.depthWrite = false; // use for transparent materials only
Also, self-transparency is particularly tricky. An example would be a semi-transparent cube (or building). One way to reduce artifacts in such situations is to render the object twice: first with material.side = THREE.BackSide and then again with material.side = THREE.FrontSide. You can use object.renderOrder to force a specific render order between objects.
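A minimal sketch of that two-pass idea, assuming the two passes share one buildingGeometry (the names are illustrative, not from your code):
var params = { color: 0x99ccff, transparent: true, opacity: 0.5, depthWrite: false };
var backMaterial = new THREE.MeshPhongMaterial( params );
backMaterial.side = THREE.BackSide;
var frontMaterial = new THREE.MeshPhongMaterial( params );
frontMaterial.side = THREE.FrontSide;
var backMesh = new THREE.Mesh( buildingGeometry, backMaterial );
backMesh.renderOrder = 0; // back faces drawn first
var frontMesh = new THREE.Mesh( buildingGeometry, frontMaterial );
frontMesh.renderOrder = 1; // front faces drawn on top
scene.add( backMesh );
scene.add( frontMesh );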
three.js r.75
I need to create text with an inset shadow on my object in three.js: something like a ring with engraved text.
I think the easiest way to do that would be to use a normal-map for the engraving, at least if the text doesn't have to be dynamic (here's how you can export a normal-map from Blender). And even if it needs to be dynamic, it might be easier to create a normal-map dynamically in a canvas than to actually create a geometry for the engraving.
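If the text does have to be dynamic, the easiest thing to draw in a canvas is a height map, which three.js can use directly as a bump map (a close cousin of the normal-map approach). A hedged sketch, assuming ringMaterial is a bump-map-capable material such as MeshPhongMaterial:
var canvas = document.createElement( 'canvas' );
canvas.width = canvas.height = 1024;
var ctx = canvas.getContext( '2d' );
ctx.fillStyle = '#808080'; // neutral gray = flat surface
ctx.fillRect( 0, 0, canvas.width, canvas.height );
ctx.fillStyle = '#000000'; // darker = lower, which reads as engraved
ctx.font = 'bold 120px sans-serif';
ctx.textAlign = 'center';
ctx.fillText( 'ENGRAVED', canvas.width / 2, canvas.height / 2 );
var bumpTexture = new THREE.Texture( canvas );
bumpTexture.needsUpdate = true; // required after drawing into the canvas
ringMaterial.bumpMap = bumpTexture;
ringMaterial.bumpScale = 0.2;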
Another option would be to actually create a geometry that contains the engraving. For that you might want to look at the ThreeCSG library, which lets you use boolean operations on geometries: you create the 3D text mesh, warp and align it to the curvature of the ring and finally subtract it from the ring mesh. This should give you the ring with the engraving cut out.
In fact, I was curious how this would actually work out and implemented something very similar here: https://usefulthink.github.io/three-text-warp-csg/ (source here).
In essence, this uses ThreeCSG to subtract a text geometry from a cylinder geometry, like so:
const textBSP = new ThreeBSP(textGeometry);
const cylinderBSP = new ThreeBSP(cylinderGeometry);
const resultGeometry = cylinderBSP.subtract(textBSP).toGeometry();
scene.add(new THREE.Mesh(resultGeometry, new THREE.MeshStandardMaterial()));
It turns out that the tessellation created by ThreeCSG is really slow (I had to move it into a worker so the page doesn't freeze for almost 10 seconds). It doesn't look too good right now, as there is still a problem with the computed normals that I haven't figured out yet.
The third option would be to use a combination of displacement and normal-maps.
This would be a lot easier and faster to process, but you would need to add a whole lot of vertices in order to have vertices available where you want a displacement to happen. Here is a small piece of code by mrdoob that can help you create the normal-map based on the displacement: http://mrdoob.com/lab/javascript/height2normal/
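As a hedged sketch of this third option (engravingTexture and engravingNormalTexture are assumed to have been prepared as described, with white marking the text):
// many segments, so there are vertices available to displace
var geometry = new THREE.CylinderGeometry( 5, 5, 2, 256, 64, true );
var material = new THREE.MeshStandardMaterial( {
    displacementMap: engravingTexture, // white where the text is
    displacementScale: -0.15, // negative sinks the text into the surface
    normalMap: engravingNormalTexture // e.g. generated with height2normal
} );
scene.add( new THREE.Mesh( geometry, material ) );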
I have a 3D model that was loaded as an OBJ file into three.js. The model itself is a piece of furniture.
The problem is that the furniture material is dynamic and differs in size (thickness). I need to be able to make the material thicker, but the total size of the model can't change, so scaling the whole model isn't an option.
Is there a way I can resize parts of the model (a few specific meshes) without compromising the structure of the mesh itself? I need to change the thickness of the structure, but the internal parts of the model shouldn't change.
The only solution I can think of is to change the scale of some of the meshes and then change the global position of the other meshes based on that. Is this the right way?
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        // resize and reposition some of the meshes
    }
});
Possible ways to solve it:
Bones
Deformation
Well, if all of the meshes are separate primitives, then you can just change the scale of each part you want to change along one axis, and set up anchor points to constrain them to the outside. So for the pieces on the border, you scale the empty object they're attached to, so that they maintain the outer shell.
For example:
OOOOOOOO
OMMMMMMO
OMmmmmMO
OMmmmmMO
OMMMMMMO
OOOOOOOO
where each O is an Object3D carrying its adjacent mesh M, and the m's represent meshes that are scaled themselves. This way, if you adjust the scale of all the m's and O's, the outer shell stays in place.
But you're on the right track with the traversal; you'll just have to tag things first.
For an easy way to traverse, I would give everything you want to change some attribute in its .userData object, because in some cases you'll want to scale the empty objects (O) (so that you can effectively move the anchor point), whereas in others you'll want to scale the meshes in place (m). So it's not a purely mesh-based operation (since meshes want to scale from their center). Some tagging makes the traversal simpler:
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        if (child.userData.isScalable) {
            // do the scaling
        }
    }
});
and if you set up the hierarchy and .userData tagging correctly, then you just scale things and you keep the outer shell.
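Here is a hedged sketch of the anchor idea (every name and number is made up for illustration):
// The pivot (an 'O') sits on the outer shell and carries its adjacent
// mesh (an 'M'), so scaling the pivot grows the panel inward while the
// outer face stays put.
var pivot = new THREE.Object3D();
pivot.position.x = outerEdgeX; // anchor on the outer face
var panel = new THREE.Mesh( panelGeometry, panelMaterial );
panel.position.x = -thickness / 2; // panel centered just inside the pivot
pivot.add( panel );
pivot.userData.isScalable = true;
object.add( pivot );

// later, to thicken the panel without moving the outer shell:
pivot.scale.x = newThickness / thickness;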
Is this what you're asking? Because the question is a bit unclear.
You could use Clara.io; it is built on top of three.js and lets you run operators on geometry that you set up in Clara.io scenes. There is a thickness operator in Clara.io that you can use.
Documentation here: http://clara.io/learn/sdk/interactive-experiences
Anything you can do in the Clara.io editor you can do in an interactive-embed.
You can use your method of changing the sizes of some meshes and the positions of others, but when you use object.scale.set(x, y, z); the browser has to apply that scale to the model for every frame rendered. So if you use this for lots of meshes, it can decrease your game's performance. The best way to go would be to use a 3D editor like Blender; it is easier and more efficient.
I'm using EaselJS and attempting to generate a simple water simulation based on this physics liquid demo. The issue I'm struggling with is the final step, where the author states they "get hard edges". It is this step that merges the particles into an amorphous blob, giving the effect of cohesion and flow.
In case the link is no longer available: in summary, I've followed the simulation steps and created a prototype for particle liquid as follows:
Created a particle physics simulation
Added a blur filter
Applied a threshold to get "hard edges"
So I wrote some code that uses a threshold check to color red (0xFF0000) any shapes/particles that meet the criteria. In this case, the criteria is any that have a color greater than RGB (0, 0, 200). If not, they are colored blue (0x0000FF).
var blurFilter = new createjs.BlurFilter(emitter.size, emitter.size*3, 1);
var thresholdFilter = new createjs.ThresholdFilter(0, 0, 200, 0xFF0000, 0x0000FF);
Note that only blue and red appear because of the previously mentioned threshold filter. For reference, the colors generated are RGB (0,0,0-255). The method r() simply generates a random number up to the passed in value.
graphics.beginFill(createjs.Graphics.getRGB(0,0,r(255)));
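For completeness, the filters are wired up through EaselJS's caching mechanism, roughly like this (particleContainer and stage stand in for my actual display objects; filters only take effect when the display object is cached):
particleContainer.filters = [ blurFilter, thresholdFilter ];
createjs.Ticker.on( 'tick', function() {
    // re-cache each tick so the filters re-run over the moving particles
    particleContainer.cache( 0, 0, stage.canvas.width, stage.canvas.height );
    stage.update();
} );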
I'm using this idea of applying a threshold criteria so that I can later set some boundaries around the particle adhesion. My thought is that larger shapes would have greater "gravity".
You can see from the fountain of particles in the animated GIF below that I've completed steps #1-2 above, but it is step #3 that I'm not certain how to apply. I haven't been able to identify a single EaselJS filter I could apply that would transform the shapes or merge them in any way.
I was considering that I might be able to do a getBounds() and draw a new shape, but the shapes wouldn't truly be merged at that point, nor would they exhibit the properties of liquid despite being larger and appearing combined.
bounds = blurFilter.getBounds(); // emitter.size+bounds.x, etc.
The issue really becomes how to define the shapes that are blurred in the image. Apart from squinting my eyes and using my imagination, I haven't been able to come to a solution.
I also looked around for a way to apply gravity between shapes so they could, perhaps, draw together and combine, but maybe EaselJS is simply not the right tool for this.
Thanks for any thoughts on how I might approach this.
I'm creating an HTML5 paint application and I'm currently working on blending layers. I'm wondering which of the following approaches would be the best (fastest and most Gimp/Photoshop-like) to build into such a program. My layers (canvases) are stacked.
Changing the blend mode via a CSS3 property (probably very fast, since blending happens directly on the graphics card; see the sketch after this list).
Having hidden canvases (layers) and one canvas that shows the flattened image to the user. We draw on the hidden canvases, and some mechanism takes each hidden canvas and draws it onto the user-visible canvas (probably slower, but each context.drawImage(...) is optimized and computed on the graphics card).
Keeping the hidden canvases (layers) truly virtual: there are no hidden canvas elements at all. Instead, a structure in memory imitates a canvas by recording the user actions performed on that virtual layer. When repainting is required, the visible canvas is reconstructed by taking the operations from each virtual layer and actually painting them; the operations must be correctly ordered within the virtual layer structure (probably slow, but maybe faster than the 2nd approach, since we don't waste time drawing anything on a layer and just store instructions for how to draw on the real one).
Blending by WebGL (probably fast).
Computing each pixel manually and showing the result to the user (super slow?).
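For reference, approach 1 would be roughly a one-liner per upper layer; a hedged sketch with an illustrative id:
var layerCanvas = document.getElementById( 'layer-2' ); // one of the stacked canvases
layerCanvas.style.position = 'absolute'; // layers sit on top of each other
layerCanvas.style.mixBlendMode = 'multiply'; // the compositor blends it over the layers below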
I'm using the second approach, but I'm not really sure it's the best (especially for more layers). I was wondering if you know any other approach, or could suggest which would be better for implementing blending operations that work as in Gimp/Adobe Photoshop.
My way of implementing this was to have multiple canvases, each of which stores its own blend mode. But each one is just a virtual canvas (not drawable by the user). Then a timer firing about 33 times per second (driven by requestAnimationFrame) grabs all these canvases, flattens them, and draws the result to the viewport. For each layer I set its blend mode on the viewport canvas before drawing. It works perfectly, since drawImage(...) is computed on the GPU and is really fast.
I think the code is fairly easy to understand and to apply to your own solutions:
Process(lm: dl.LayerManager) {
    var canvasViewPort = lm.GetWorkspaceViewPort();
    var virtualLayers = lm.GetAllNonTemporary();

    canvasViewPort.Save();
    canvasViewPort.Clear();

    // draw the bottom layer first, stretched to the viewport
    canvasViewPort.SetBlendMode(virtualLayers[0].GetBlendMode());
    canvasViewPort.ApplyBlendMode();
    canvasViewPort.DrawImage(virtualLayers[0],
        0, 0, canvasViewPort.Width(), canvasViewPort.Height(),
        0, 0, canvasViewPort.Width(), canvasViewPort.Height());

    // composite the remaining layers with their own blend modes and offsets
    for (var i = 1; i < virtualLayers.length; ++i) {
        var layer = virtualLayers[i];
        var shift = layer.GetShift(); // was "other.GetShift()"; "other" is undefined here
        canvasViewPort.SetBlendMode(layer.GetBlendMode());
        canvasViewPort.ApplyBlendMode();
        canvasViewPort.DrawImage(layer,
            0, 0, layer.Width(), layer.Height(),
            shift.x, shift.y, layer.Width(), layer.Height());
    }

    canvasViewPort.Restore();
}
Don't be afraid of this code. It maintains an underlying canvas, a pure CanvasRenderingContext2D.
One optimization I will perform: don't redraw canvasViewPort until something changes on any of the virtual canvases.
The next optimization I would perform: take the selected layer and cache everything before it and everything after it.
So it will look like this:
CurrentLayer: 3 (which might be affected by some tool)
[1]:L1
[2]:L2
[3]:L3
[4]:L4
[5]:L5
[6]:L6
Draw the layers to temporaries with the appropriate blend modes: [1, 2] to tmp1 and [4, 5, 6] to tmp2.
When something changes on the third layer, I just need to redraw tmp1 + [3] + tmp2.
So that's just 3 draws instead of 6. Super fast.
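A hedged sketch of that caching scheme against plain canvases (all names are illustrative, not from my actual code):
function flattenTo( targetCanvas, layers ) {
    var ctx = targetCanvas.getContext( '2d' );
    ctx.clearRect( 0, 0, targetCanvas.width, targetCanvas.height );
    for ( var i = 0; i < layers.length; ++i ) {
        ctx.globalCompositeOperation = layers[ i ].blendMode; // e.g. 'multiply'
        ctx.drawImage( layers[ i ].canvas, 0, 0 );
    }
}

// rebuilt only when the selected layer changes:
flattenTo( tmpBefore, layers.slice( 0, current ) );
flattenTo( tmpAfter, layers.slice( current + 1 ) );

// per frame, only three draws:
function composeFrame( viewport, activeLayer ) {
    var ctx = viewport.getContext( '2d' );
    ctx.clearRect( 0, 0, viewport.width, viewport.height );
    ctx.globalCompositeOperation = 'source-over';
    ctx.drawImage( tmpBefore, 0, 0 );
    ctx.globalCompositeOperation = activeLayer.blendMode;
    ctx.drawImage( activeLayer.canvas, 0, 0 );
    ctx.globalCompositeOperation = 'source-over'; // tmpAfter already has its blend modes baked in
    ctx.drawImage( tmpAfter, 0, 0 );
}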
Is it possible to merge vertices only at render time? I'm doing a series of morphs, which requires the vertex list to stay the same, but I want to merge the vertices to get a smooth reflection on a cube camera. Is anyone aware of a command similar to an "unmerge vertices"?
Have you tried doing it? It should work.
You'll need to set
geometry.verticesNeedUpdate = true;
geometry.elementsNeedUpdate = true;
to tell three.js that the vertices and faces, respectively, have changed (these are flags, not functions). There are other update flags you may need to set too (for instance, if normals have changed). More details here: https://github.com/mrdoob/three.js/wiki/Updates
Note the comment on that page that the total number of vertices can't change. This may require you to do the merge on a temp geometry and then copy the vertices to your rendered geometry.
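A hedged sketch of that temp-geometry trick for the smooth normals (it assumes mergeVertices removes no degenerate faces, so face order still lines up between the clone and the original):
var tmp = geometry.clone();
tmp.mergeVertices(); // the vertex count changes here, but only on the clone
tmp.computeFaceNormals();
tmp.computeVertexNormals(); // smooth normals across the merged seams
for ( var i = 0; i < geometry.faces.length; i++ ) {
    geometry.faces[ i ].vertexNormals = tmp.faces[ i ].vertexNormals;
}
geometry.normalsNeedUpdate = true;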
Alright, this is not in the documentation section, but you need to use the ExplodeModifier, as demonstrated in this example: http://threejs.org/examples/#webgl_geometry_tessellation
var explodeModifier = new THREE.ExplodeModifier();
explodeModifier.modify( geometry );
geometry.computeFaceNormals();
geometry.computeVertexNormals();
//This will undo the geometry.mergeVertices();