THREE.js: How does Object3D.userData work? - javascript

So I was wondering how the .userData property works in the Object3D class.
This is what I understood:
It is an empty object that is not used by any internal logic of three.js, so it is a safe place to park data that I want to have attached to a given object, whereas adding custom properties directly to the object itself can cause problems in the event of a name conflict.
The only problem is that I do not know when to use it or how to implement it in my code. Also, can someone tell me if what I said is right?

According to the three.js documentation
.userData : Object
An object that can be used to store custom data about the Object3D. It should not hold references to functions as these will not be cloned.
and what is Object3D
This is the base class for most objects in three.js and provides a set of properties and methods for manipulating objects in 3D space.
so Object3D is the base class for most objects in three.js, including cameras, lights, meshes, groups, etc. (but not materials, which do not inherit from Object3D).
Whenever you need to store some custom information about a particular object that you might want to use for display or any other calculation, you can store it in the .userData property.
A simple example: you created a geometry (a BoxGeometry) and now want to keep track of how many times the user has clicked on the resulting mesh. This count can be kept in the .userData property. Later you can use it however you want, either to display it somewhere or for further calculations.
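A minimal sketch of that click counter (the mesh/scene setup is assumed to already exist; `registerClick` is my own helper name, not a three.js API):

```javascript
// Increment a click counter stored in an Object3D's userData.
// Works on anything with a userData object (Mesh, Group, ...).
function registerClick(object3d) {
  // userData starts out as an empty object, so initialize the field lazily
  object3d.userData.clickCount = (object3d.userData.clickCount || 0) + 1;
  return object3d.userData.clickCount;
}

// Usage sketch in a real scene (assumes `mesh` is a THREE.Mesh and a
// raycaster has already told you it was clicked):
// registerClick(mesh);
// console.log(`clicked ${mesh.userData.clickCount} times`);
```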

Related

Is there any way to avoid the camera passing through glTF objects loaded inside our scene in three.js?

hey there, I am working with three.js and I want to develop an online virtual tour. I loaded some 3D objects as booths and a camera with PointerLockControls, and everything works fine, but the problem is that the camera passes through the objects, and I think that is not good. Is there any way to avoid this issue?
So the problem is that the camera is sometimes inside an object and you can see this object from inside (which doesn't look good)?
If so, there are two basic solutions:
You can create a collision system to detect camera <=> object collisions.
A much simpler solution: just set your objects' material like this: material.side = THREE.FrontSide. The camera will still be able to go inside an object, but the object will not be visible from the inside (if that's your problem).
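As a sketch, the material tweak above can be applied to a whole loaded model in one go (`hideBackFaces` is my own helper name; `gltf.scene` is assumed from a typical GLTFLoader setup):

```javascript
// Hypothetical helper (name is mine): walk a loaded model and make every
// mesh material render front faces only, so the camera sees nothing when
// it ends up inside an object.
function hideBackFaces(root, frontSide) {
  root.traverse((child) => {
    // note: child.material can also be an array of materials in three.js;
    // this sketch only handles the single-material case
    if (child.isMesh) child.material.side = frontSide;
  });
}

// Usage sketch, assuming a typical GLTFLoader result:
// loader.load('booths.glb', (gltf) => {
//   hideBackFaces(gltf.scene, THREE.FrontSide);
//   scene.add(gltf.scene);
// });
```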

Should the raycaster always be located in the render() function?

I need to catch the location of the mouse double-click event and create an object on its place in 3D scene.
As I understand it, a raycaster located in the render() function updates the mouse intersection continuously. I want this to happen only when the double-click takes place. Does it make sense to put it into the object-creation function?
You can put it pretty much wherever you want. That's especially important if your scenes are super complex: you will find you need to restrict when/how often you raycast, and also control what you're raycasting against, e.g. raycasting only against a subtree of your scene or against a specific array of objects.
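The advice above can be sketched as raycasting only inside the dblclick handler (`camera`, `scene`, and `renderer` are assumed from your setup; `toNDC` is my own helper name):

```javascript
// Convert a mouse event position to normalized device coordinates (-1..+1),
// which is what Raycaster.setFromCamera expects.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1, // y is flipped: screen down = NDC down
  };
}

// Usage sketch (assumes `camera`, `scene`, `renderer` from your setup):
// renderer.domElement.addEventListener('dblclick', (event) => {
//   const raycaster = new THREE.Raycaster();
//   const ndc = toNDC(event.clientX, event.clientY,
//                     renderer.domElement.clientWidth,
//                     renderer.domElement.clientHeight);
//   raycaster.setFromCamera(ndc, camera);
//   const hits = raycaster.intersectObjects(scene.children, true);
//   if (hits.length > 0) { /* create the object at hits[0].point */ }
// });
```

This way the (relatively expensive) intersection test runs once per double-click instead of once per frame.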

WebGL - advanced rendering with instancing

I have a library that renders models of specific formats.
These formats have animations, and constantly changing things like dynamic mesh colors (aka vertex color per mesh), meshes dynamically being visible or not, and many other things, all depending on the animations.
All was fine, and then I wanted to make the rendering actually fast and switched to instanced rendering.
Since then everything became a nightmare, and no matter how many designs I try, I simply cannot get it to work properly.
The code works in such a way that every mesh in every model owns a bucket object, which holds shared buffers for many instances of this mesh.
For example, every instance needs the mentioned dynamic vertex color, so the bucket holds one buffer with enough memory for some N vertex colors.
When a new instance is added, a free index into this buffer is given to it.
Every instance writes to this index as if it's a C pointer, and at the end the whole buffer is updated with the new values.
So far so good.
When an instance is removed, there's a hole in the buffer.
What I did was to "move" the last instance to the one that got deleted, essentially the same as changing pointers.
This isn't so nice, but it's fast.
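That swap-with-last removal can be sketched like this (a minimal sketch; `Bucket` and its field names are mine, not from the actual library):

```javascript
// Each instance owns an index into a shared per-bucket array; removing an
// instance moves the last instance's data into the freed slot so the
// buffer stays densely packed for instanced draws.
class Bucket {
  constructor(floatsPerInstance, capacity) {
    this.stride = floatsPerInstance;
    this.data = new Float32Array(floatsPerInstance * capacity);
    this.instances = []; // live instance objects, each with an .index
  }
  add(instance) {
    instance.index = this.instances.length;
    this.instances.push(instance);
    return instance.index;
  }
  remove(instance) {
    const last = this.instances.pop();
    if (last !== instance) {
      // move the last instance's slot data into the hole
      const from = last.index * this.stride;
      const to = instance.index * this.stride;
      this.data.copyWithin(to, from, from + this.stride);
      last.index = instance.index;
      this.instances[instance.index] = last;
    }
  }
}
```

Removal is O(stride) regardless of bucket size, at the cost of reordering instances.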
Then the issues came in two ways - textures, and cameras.
Let's say I want one instance to use a different texture; this is not possible directly with instanced rendering.
So now it's split: a mesh holds an array of mesh "views", and each of these views holds the buckets.
A view can define its own texture overrides, so every view adds N render calls.
This is so-so, but I couldn't think of anything else.
And finally, the actual problem I am now totally stuck on - cameras.
I want to be able to define the camera each instance uses.
It looked as if I could just add a matrix as an attribute, but that doesn't allow changing the viewport, which is kind of important.
I cannot for the life of me think of any way that actually works, that allows all of the above. Switching cameras, switching textures, and instanced rendering.
To make matters worse, I also can't think of any nice way to cull instances.
The only real way I can see to really support everything, is to dynamically keep re-creating buffers for each render call, and copying tons of data each frame, both on the CPU side and GPU side.
For example - sort all of the instances per-camera into different JS arrays (culling can be done here), create a typed array for each one, copy all of the instance data from all of the instances into this array, bind it to a buffer, and redo this for every group, for every "view", for every frame.
This is basically OOP rendering, switched to instanced rendering right at the final step, and I doubt it will be any kind of fast...
tl;dr instanced rendering is a hell to actually use. Are there any known proper techniques to use it with different textures/cameras/whatnot?

Access to render pipeline in XML3D: Object highlighting

I want to make an object become highlighted when selected. To do this I need a custom shader that renders the model outline - this part of the task I'm familiar with, since XML3D provides a way to implement custom shaders.
But the missing piece is access to the render pipeline:
It is impossible to make nice highlighting without either copying the model and painting it over the old one,
or rendering the scene in two passes (post-processing).
Creating another model copy in the usual way (attaching a new element to the DOM tree) won't solve the issue, since I also need to control scene blending.
How do I get this done with XML3D?
Is it possible without digging deep into the library?
In general there are four approaches to implement highlighting:
you can exchange materials back and forth (not very efficient)
you can use material overrides for highlighted objects, which will adapt one or more parameters of the current shading (e.g. emissive color)
you can use a custom shader in combination with an arbitrary uniform attribute which indicates that the object should be highlighted. In the shader, you can adapt color and rendering based on the attribute. E.g. you could do a rim-highlighting or a wireframe rendering. Here is an example for a shader that colors an object if the uniform selected has a specific value.
For instance:
<mesh id="foo">
    <data src="mesh-data.json"></data>
    <float name="selected">0</float>
</mesh>
To highlight this object: $("#foo float[name=selected]").text("1");
you can adapt the rendering pipeline to render the highlighted object twice and blend it in various ways
If sufficient for your use-case, I would recommend approach 3, as it is not very intrusive. The interface for creating custom rendering pipeline is not yet very stable.
As ksons mentioned the render pipeline interface is undergoing some major changes right now, XML3D 4.8 is the last version that supports it in its current form. Version 5.0 will likely re-introduce it in a (hopefully) much improved form.
We use a custom render pipeline in one of our internal projects to draw a wireframe overlay onto selected models, I've posted a simplified version of this pipeline as a Gist for you to have a look at. Essentially it renders the scene using the standard internal render pass and then does a second pass to draw the highlighted objects in a crude wireframe mode without depth testing.
As I said this works in v4.8, if you need this functionality in v4.9 then please open an issue and I'll see about re-enabling it as a minor release.

Serialization and observing changes in meteor collection?

I have an application (I'll make it short :p) - on the client side there is a list of objects, and each of these objects has a property like "drawable", which is a fabric.js object instance on a canvas. The point is that I want to make it fully reactive and synchronized - each time someone moves an element on the canvas or deletes an item from the list, it has to be updated on every other client.
And here is the question - how to make an optimal data structure for such a task? Currently I am using something like this:
ListObject {
    name: 'name',
    otherAttributes: 'blahblah',
    drawable: 'fabricjsObjectInstance'
}
I don't want to do it the serialize/deserialize way.
Some more examples of what I mean:
when an object on the canvas is moved, only its left and top attributes change - so I don't want to send and receive a whole canvas serialization
is it possible to somehow bind an array of objects in code to a minimongo collection?
can I get rid of all 'Models' in the JS code and use only a Meteor Collection with some behavior methods?
I am more than sure that I am doing it wrong or that I don't get Meteor reactivity. My point is not to get an answer to every question, because it might be impossible, but to get some ideas or suggestions on how to sort this mess out.
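For the moved-object case above, a hedged sketch of pushing only the changed fields into a collection (the collection name `Items` and the `meteorId` back-link are my own assumptions, not Meteor requirements):

```javascript
// Build a minimal Mongo modifier containing just the moved coordinates,
// instead of reserializing the whole fabric.js canvas.
function positionModifier(drawable) {
  return { $set: { left: drawable.left, top: drawable.top } };
}

// Usage sketch, assuming `Items` is a Meteor Mongo.Collection and each
// canvas object stores its document _id in drawable.meteorId (my convention):
// canvas.on('object:moved', (e) => {
//   Items.update(e.target.meteorId, positionModifier(e.target));
// });
```

On other clients, observing the collection (or a reactive template) can then apply just `left`/`top` to the matching fabric.js object.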
