Access to render pipeline in XML3D: Object highlighting - javascript

I want an object to become highlighted when it is selected. To do this I need a custom shader that renders a scaled-up outline of the model - that part of the task I'm familiar with, since XML3D provides a way to implement custom shaders.
But the missing piece is access to the render pipeline:
it's impossible to do nice highlighting without either copying the model and painting it over the old one,
or rendering the scene in two passes (postprocessing).
Creating another copy of the model in the usual way (attaching a new element to the DOM tree) won't solve the issue, since I also need control over scene blending.
How do I get this done with XML3D?
Is it possible without digging deep into the library?

In general there are four approaches to implementing highlighting:
1. You can exchange materials back and forth (not very efficient).
2. You can use material overrides for highlighted objects, which adapt one or more parameters of the current shading (e.g. the emissive color).
3. You can use a custom shader in combination with an arbitrary uniform attribute that indicates whether the object should be highlighted. In the shader, you can adapt color and rendering based on that attribute, e.g. a rim highlight or a wireframe rendering. Here is an example of a shader that colors an object if the uniform selected has a specific value.
For instance:
<mesh id="foo">
    <data src="mesh-data.json"></data>
    <float name="selected">0</float>
</mesh>
To highlight this object: $("#foo float[name=selected]").text("1");
4. You can adapt the rendering pipeline to render the highlighted object twice and blend it in various ways.
If it is sufficient for your use case, I would recommend approach 3, as it is not very intrusive. The interface for creating a custom rendering pipeline (approach 4) is not yet very stable.
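If you prefer not to depend on jQuery, the same toggle works with plain DOM calls. A minimal sketch, assuming click events reach the <mesh> element (names match the example above):

// Sketch: toggle the "selected" uniform from approach 3 without jQuery.
// Assumes XML3D dispatches click events on the <mesh> element.
var mesh = document.getElementById("foo");
mesh.addEventListener("click", function () {
  var flag = mesh.querySelector("float[name=selected]");
  flag.textContent = flag.textContent.trim() === "1" ? "0" : "1";
});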

As ksons mentioned, the render pipeline interface is undergoing some major changes right now; XML3D 4.8 is the last version that supports it in its current form. Version 5.0 will likely re-introduce it in a (hopefully) much improved form.
We use a custom render pipeline in one of our internal projects to draw a wireframe overlay onto selected models, I've posted a simplified version of this pipeline as a Gist for you to have a look at. Essentially it renders the scene using the standard internal render pass and then does a second pass to draw the highlighted objects in a crude wireframe mode without depth testing.
As I said, this works in v4.8; if you need this functionality in v4.9 then please open an issue and I'll see about re-enabling it in a minor release.

Related

WebGL - advanced rendering with instancing

I have a library that renders models of specific formats.
These formats have animations, and constantly changing things like dynamic mesh colors (aka vertex color per mesh), meshes dynamically being visible or not, and many other things, all depending on the animations.
All was fine, and then I wanted to make the rendering actually fast and switched to instanced rendering.
Since then everything became a nightmare, and no matter how many designs I try, I simply cannot get it to work properly.
The code works in such a way that every mesh in every model owns a bucket object, which holds shared buffers for many instances of this mesh.
For example, every instance needs the mentioned dynamic vertex color, so the bucket holds one buffer with enough memory for some N vertex colors.
When a new instance is added, a free index into this buffer is given to it.
Every instance writes to this index as if it's a C pointer, and at the end the whole buffer is updated with the new values.
So far so good.
When an instance is removed, there's a hole in the buffer.
What I did was to "move" the last instance to the one that got deleted, essentially the same as changing pointers.
This isn't so nice, but it's fast.
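For illustration, the swap-remove for one per-instance attribute looks roughly like this (a sketch with made-up names, not the actual bucket code):

// Sketch of the swap-remove described above, for one vec4 color per instance.
// "bucket", "instances", "slot" and "dirty" are illustrative names.
function removeInstance(bucket, index) {
  var last = bucket.count - 1;
  if (index !== last) {
    // Overwrite the removed slot with the last instance's data...
    bucket.colors.copyWithin(index * 4, last * 4, last * 4 + 4);
    // ...and repoint that instance so it keeps writing to its new slot.
    bucket.instances[last].slot = index;
    bucket.instances[index] = bucket.instances[last];
  }
  bucket.count = last;
  bucket.dirty = true; // re-upload (e.g. gl.bufferSubData) before the next draw
}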
Then the issues came in two ways - textures, and cameras.
Let's say I want one instance to use a different texture; this is impossible with instanced rendering directly.
So now it's split: a mesh holds an array of mesh "views", and each of these views holds the buckets.
A view can define its own texture overrides, and so every view adds another N render calls.
This is so-so, but I couldn't think of anything else.
And finally, the actual problem I am now totally stuck on - cameras.
I want to be able to define the camera each instance uses.
It looked as if I could just add a matrix as an attribute, but that doesn't allow changing the viewport, which is kind of important.
I cannot for the life of me think of any way that actually works, that allows all of the above. Switching cameras, switching textures, and instanced rendering.
To make matters worse, I also can't think of any nice way to cull instances.
The only real way I can see to really support everything, is to dynamically keep re-creating buffers for each render call, and copying tons of data each frame, both on the CPU side and GPU side.
For example - sort all of the instances per-camera into different JS arrays (culling can be done here), create a typed array for each one, copy all of the instance data from all of the instances into this array, bind it to a buffer, and redo this for every group, for every "view", for every frame.
This is basically OOP rendering, switched to instanced rendering right at the final step, and I doubt it will be any kind of fast...
tl;dr: instanced rendering is hell to actually use. Are there any known, proper techniques for using it with different textures/cameras/whatnot?
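(For reference, the "rebuild per camera per frame" idea described above would look roughly like the sketch below. It assumes a WebGL2 context, and groupByCamera, writeInstanceData, FLOATS_PER_INSTANCE, viewProjLocation, instanceBuffer and vertexCount are all illustrative names assumed to be set up elsewhere.)

// Sketch: per-camera instance grouping with re-uploaded buffers (WebGL2).
function renderFrame(gl, instances, cameras) {
  var groups = groupByCamera(instances, cameras); // also a natural place to cull

  groups.forEach(function (group) {
    var cam = group.camera;
    gl.viewport(cam.viewport.x, cam.viewport.y, cam.viewport.w, cam.viewport.h);
    gl.uniformMatrix4fv(viewProjLocation, false, cam.viewProjection);

    // Pack this group's per-instance data into one typed array and upload it.
    var data = new Float32Array(group.instances.length * FLOATS_PER_INSTANCE);
    group.instances.forEach(function (inst, i) {
      writeInstanceData(data, i * FLOATS_PER_INSTANCE, inst);
    });
    gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);

    gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, group.instances.length);
  });
}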

Animated buffergeometry with morphattributes not updating shadow

I have converted the MD2 code from the library to use THREE.BufferGeometry instead of THREE.Geometry to vastly reduce the memory footprint. To do this I just convert the model to THREE.BufferGeometry after it is done loading. I also had to modify the MorphBlendMesh code to use attributes for the morphTargetInfluences.
This is working great except for one issue: the shadows don't update during animation; the renderer always uses the shadow from the first frame of the animation.
I haven't seen any documentation on morphTargetInfluences attributes, so I don't have much to go on.
I can't really post the code since it is too much spread out across the code base.
I am just hoping that someone out there might have some insight as to how shadows get updated during morph animation, and maybe point me in the right direction on how to research this issue.
I have found the problem, and a workaround!
The shadow code in the renderer checks whether geometry.morphTargets has a non-zero length before it decides to set the 'usemorphing' flag. The converted BufferGeometry does not have a .morphTargets field, since this information appears to have moved to .morphAttributes for BufferGeometries.
My hack workaround is to add a fake .morphTargets list like so:
// give the converted geometry a non-empty morphTargets list so the
// renderer enables its morphing path for shadows as well
bufferGeometry.morphTargets = [];
bufferGeometry.morphTargets.push(0);
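For context, a rough sketch of the pieces that have to line up for morph-animated shadows (illustrative variable names; assumes an older three.js build where materials still carry a morphTargets flag):

// Sketch only: the morphing/shadow flags that have to agree.
material.morphTargets = true;        // material opts in to morphing
bufferGeometry.morphTargets = [ 0 ]; // workaround: non-empty list so the renderer
                                     // sets its "usemorphing" flag for shadows too
var mesh = new THREE.Mesh(bufferGeometry, material);
mesh.castShadow = true;

var light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;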

Code reuse (DRY) in procedural drawing?

I am building a workflow tool that renders with d3js. I initially created the tool in an "editor" mode, where users can drag and manipulate nodes and dependencies within a workflow. Now I am having a hard time rationalizing the proper way to make my current draw procedure support "view" and "edit" modes and stay DRY.
My initial code is basically one long draw procedure:
Define JS prototypes
Create initial d3js grid (zoom/pan grid)
Function(s): Define menu system
Function: Draw dependencies (svg groups in d3js)
Function: Draw nodes (svg groups in d3js)
Get data, iterate & instantiate, iterate and draw dependencies, iterate and draw nodes
Adding switch logic within each of the functions where mode-behavior or SVG structure differs seems like the road to spaghetti-code.
It feels equally wrong to have three independent procedures, one per mode - but I'm starting to think that having a distinct "draw" procedure for each mode is the more flexible way to go about supporting three modes.
Maybe I'm thinkin' about this all wrong. Maybe I should render everything in read-only mode first and then apply any "edit" or "monitor" mode behaviors or visual additions afterwards.
What's the best way to support multiple similar, but different draw procedures in a DRY fashion?
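One way to keep things DRY along the lines of the last idea is a single shared draw procedure plus a small per-mode behavior object that decorates the result. A rough sketch (d3 v3-style API; drawDependencies, drawNodes and onNodeDrag are illustrative):

// Sketch: shared draw, then a per-mode decorator applied afterwards.
var modeBehaviors = {
  view: function (nodes) { /* read-only: nothing extra */ },
  edit: function (nodes) {
    nodes.call(d3.behavior.drag().on("drag", onNodeDrag)); // enable dragging
  },
  monitor: function (nodes) {
    nodes.classed("monitored", true);                      // status styling only
  }
};

function draw(svg, workflow, mode) {
  drawDependencies(svg, workflow.dependencies); // same for all modes
  var nodes = drawNodes(svg, workflow.nodes);   // same for all modes
  modeBehaviors[mode](nodes);                   // mode-specific additions
}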

three.js and depthFunc

From looking at the code, it seems that three.js does not give much control over the depthFunc. I would like to confirm that it's only set once as the default GL state, and not available, say, on the material?
I'm not familiar with all examples, and was wondering if this is happening somewhere?
If not, what would be the best approach to set the depthFunc to gl.EQUAL for example, when a specific draw call is being made i.e. a mesh with a material?
Is something like toggling between scenes (i.e. use one to render stuff, then use another one to render stuff on top of the first) a good solution for this? That's the only example I've seen of tweaking the otherwise sorted objects.
It's currently in the dev branch of three.js: see the pull request.
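For what it's worth, this did eventually ship as a per-material setting in later three.js releases (Material.depthFunc, together with constants such as THREE.EqualDepth); a short sketch:

// Sketch: per-material depth function in a three.js release that supports it.
var material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
material.depthFunc = THREE.EqualDepth; // only shade where depth matches exactly
material.depthWrite = false;           // typical for decal/overlay-style passes
scene.add(new THREE.Mesh(geometry, material)); // geometry/scene assumed to exist

In the meantime, the two-scene overlay approach also works: disable renderer.autoClear, render the first scene, clear the depth buffer, then render the second scene on top.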

Backbone.js and three.js - MVC with canvas

I am in the planning stages of developing a small web-app that does some interactive data visualization in a 3D space.
For widest browser compatibility, three.js looks like the best choice, as I can render the same scene using WebGL, canvas, or SVG.
Ideally, I want to use backbone.js to provide a nice MVC layer and avoid some of the tedium of writing the ajax, but before I get too far with it, I was wondering if anyone had any experience/tips/words of advice in trying to make that work.
Assuming canvas or WebGL, it seems like the Backbone.View could be pretty easily abstracted to support a three.js model. The render function is meant to be overridden. I could attach a simple listener on the canvas and then use some three.js trickery to pull out the specific model for firing off events (which seems like it would be the most difficult task). Backbone models and collections would work just fine with my API (I think). The controllers would probably be a bit more difficult, but could possibly even be used by saving the position of the camera or something similar.
With SVG rendering, this is obviously simplified with all the elements being in the DOM, but I question if SVG would even be a good option when there are 1,000+ objects in the scene. Anyone have experience with large scene graphs in SVG?
Are there other libraries, either for rendering or similar to Backbone, that would be a better route to take? I am open to suggestions on the matter.
Your estimation of how you would use Backbone is pretty right-on, and there's even an added bonus, I think. You mentioned something about using "three.js trickery to pull out the specific model for firing off events (which seems like it would be the most difficult task)" - not sure if I'm just getting confused by the use of "model", but when a view render is triggered, the collection/model it is bound to is passed to that render method - there would be no need for a lookup. And through Underscore's _.bindAll(), you can bind a render method (or any method on the view, really) to any event generated by the collection _.bindAll() is executed on. AND you can trigger all your own custom events off said model/collection. The possibilities are pretty limitless due to this. And yes, a render method could be anything, so interaction with three.js in that space should be perfect. That's a lot of "and"s!
What you want to do is definitely possible, sounds really fun, and is definitely a great application for Backbone.
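To make the "a render method could be anything" point concrete, here is a rough sketch of a Backbone view that drives a three.js mesh instead of the DOM (illustrative names; older Backbone/Underscore style to match the _.bindAll approach described above):

// Sketch: a Backbone view whose render() updates a three.js mesh.
var MeshView = Backbone.View.extend({
  initialize: function (options) {
    this.scene = options.scene;            // a shared THREE.Scene, passed in
    _.bindAll(this, "render");
    this.model.on("change", this.render);  // re-render on any attribute change

    this.mesh = new THREE.Mesh(
      new THREE.SphereGeometry(1, 16, 16),
      new THREE.MeshLambertMaterial({ color: 0xcccccc })
    );
    this.scene.add(this.mesh);
  },

  render: function () {
    // Map model attributes onto the mesh; no DOM work at all.
    this.mesh.position.set(this.model.get("x"), this.model.get("y"), this.model.get("z"));
    this.mesh.material.color.setHex(this.model.get("color"));
    return this;
  }
});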
