Say you have a simple raycaster: when you mouse over a model, it lights up.
However, in this implementation the model would be broken into parts, each of which is part of the model but still its own separate "model". For example, say your model happens to be a car: when you mouse over the hood, the hood lights up; when you mouse over the door, the door lights up, and so on.
I did not find anything like this in the three.js examples.
Is there a way to break a full .obj model into several individual but connected models in three.js?
I have been working with three.js for quite some time, but I do not think this is possible, at least not with three.js itself (maybe raw WebGL has some tools that would help you achieve this, but for a complex model such as a car the result would still be pretty poor). There are, however, several workarounds.
In three.js, build your model from multiple smaller ones. For simple objects this is possible (for example, instead of a sphere, create two hemispheres and place them next to each other).
Instead of using a raycaster, use a point light. This casts light on an area rather than on an object, so if you target one big object you will end up lighting only part of it, depending on the intensity and distance of the point light.
If you have a complex model such as a car, load it into a 3D modelling program (Blender, ...), break it into smaller parts, and save each of them separately. Then, in your three.js code, load each part separately and position them so that together they look like a single object. (I guess this is the only reasonable way in this case.)
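A minimal sketch of that last workaround, assuming the parts have been exported as separate OBJ files; the file names and the emissive-highlight choice are just illustrative, and scene and camera are assumed to exist elsewhere:

import * as THREE from 'three';
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

// Hypothetical part files exported from Blender.
const partFiles = ['car_hood.obj', 'car_door_left.obj', 'car_body.obj'];

const loader = new OBJLoader();
const carParts = new THREE.Group();
partFiles.forEach((file) => {
  loader.load(file, (part) => carParts.add(part)); // each part stays its own object
});
scene.add(carParts);

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
let highlighted = null;

window.addEventListener('mousemove', (event) => {
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);

  const hits = raycaster.intersectObjects(carParts.children, true);

  // Reset the previous highlight, then light up the hovered part.
  // (Assumes a single non-array material with an emissive color,
  // e.g. the MeshPhongMaterial that OBJLoader creates by default.)
  if (highlighted) highlighted.material.emissive.set(0x000000);
  highlighted = hits.length > 0 ? hits[0].object : null;
  if (highlighted) highlighted.material.emissive.set(0x444444);
});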
Related
I want to detect when the user in my ThreeJS scene is looking at one of the animation models. Currently I am using this code:
ourRaycaster.setFromCamera(g_Mouse, g_ThreeJsCamera);
g_AryObjectsLookedAt = ourRaycaster.intersectObjects(g_ThreeJsScene.children, true);
The problem is that although the raycaster will detect any collision between the camera's current line of sight and a child object in the animation model's group, it will not intersect with any empty space in the group. Although this makes sense from a technical point of view, it doesn't match what we do as humans when looking at an object. When we look at an object, we mentally draw a shape around the overall object, and if we are looking anywhere inside that pseudo-shape we feel that we are looking at the object.
For example, if you have a big head and wide shoulders, but a very thin neck, if I focus momentarily at the space alongside your neck I still feel that I am looking at "you".
What I need is a raycaster that when it passes through an animation model's group of children, does approximately the same thing.
Here is a concrete example. I am using the sample robot animation model from the ThreeJS examples. It has a really big head. When I do the raycast from this distance, no part of the robot is in the intersecting objects list:
But if I get right up "in its face", it is detected (its "face" is detected):
This is technically accurate, but I need something a lot more forgiving so I can create proper logic for interactions between the animation models and the user. How can I do this without having to iterate manually over every animation model in the virtual scene?
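One possible direction (a hedged sketch, not a built-in ThreeJS feature) is to test the ray against a padded bounding box around each model's whole group instead of against its child meshes. It still loops over the top-level models, but not over every child object, and the padding gives the forgiving "pseudo-shape" behaviour; animationModelGroups and the padding value are assumptions:

const box = new THREE.Box3();

// Returns the models whose (padded) world-space bounding box the ray passes through.
// Note: for skinned/animated meshes the box follows the geometry, not the current pose.
function modelsLookedAt(raycaster, models) {
  const lookedAt = [];
  for (const model of models) {
    box.setFromObject(model);   // box around the whole group, gaps included
    box.expandByScalar(0.5);    // arbitrary extra padding; tune to taste
    if (raycaster.ray.intersectsBox(box)) {
      lookedAt.push(model);
    }
  }
  return lookedAt;
}

// Used with the variables from the question:
ourRaycaster.setFromCamera(g_Mouse, g_ThreeJsCamera);
g_AryObjectsLookedAt = modelsLookedAt(ourRaycaster, animationModelGroups); // assumed list of model groups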
I have a library that renders models of specific formats.
These formats have animations, and constantly changing things like dynamic mesh colors (aka vertex color per mesh), meshes dynamically being visible or not, and many other things, all depending on the animations.
All was fine, and then I wanted to make the rendering actually fast and switched to instanced rendering.
Since then everything became a nightmare, and no matter how many designs I try, I simply cannot get it to work properly.
The code works in such a way that every mesh in every model owns a bucket object, which holds shared buffers for many instances of this mesh.
For example, every instance needs the mentioned dynamic vertex color, so the bucket holds one buffer with enough memory for some N vertex colors.
When a new instance is added, a free index into this buffer is given to it.
Every instance writes to this index as if it's a C pointer, and at the end the whole buffer is updated with the new values.
So far so good.
When an instance is removed, there's a hole in the buffer.
What I did was to "move" the last instance to the one that got deleted, essentially the same as changing pointers.
This isn't so nice, but it's fast.
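A rough sketch of the bucket scheme described above, including the swap-remove on deletion; the class and field names are my assumptions, not the library's actual code:

// Each mesh owns a bucket with shared per-instance buffers.
class Bucket {
  constructor(capacity) {
    this.capacity = capacity;
    this.count = 0;
    this.vertexColors = new Float32Array(capacity * 4); // one RGBA slot per instance
    this.instances = [];                                // instance at index i owns slot i
  }

  add(instance) {
    const slot = this.count++;
    this.instances[slot] = instance;
    instance.slot = slot;   // the "C pointer"-like index the instance writes through
    return slot;
  }

  remove(instance) {
    const hole = instance.slot;
    const last = this.count - 1;
    if (hole !== last) {
      // Swap-remove: move the last instance's data into the hole.
      this.vertexColors.copyWithin(hole * 4, last * 4, last * 4 + 4);
      this.instances[hole] = this.instances[last];
      this.instances[hole].slot = hole;
    }
    this.instances.length = last;
    this.count = last;
    // Only the first `count` slots need to be re-uploaded to the GPU afterwards.
  }
}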
Then the issues came, in two forms: textures and cameras.
Let's say I want one instance to use a different texture; this is impossible with instanced rendering directly.
So now it's split: a mesh holds an array of mesh "views", and each of these views holds the buckets.
A view can define its own texture overrides, and so every view is an additional N render calls.
This is so-so, but I couldn't think of anything else.
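The per-view draw loop might look roughly like this (a WebGL2-style sketch; bindInstanceBuffers and the field names are invented for illustration):

// One instanced draw call per "view"; each view may override the texture.
for (const mesh of model.meshes) {
  for (const view of mesh.views) {
    gl.bindTexture(gl.TEXTURE_2D, view.textureOverride || mesh.defaultTexture);
    bindInstanceBuffers(view.bucket); // hypothetical helper that binds the bucket's attributes
    gl.drawElementsInstanced(gl.TRIANGLES, mesh.indexCount,
                             gl.UNSIGNED_SHORT, 0, view.bucket.count);
  }
}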
And finally, the actual problem I am now totally stuck on - cameras.
I want to be able to define the camera each instance uses.
It looked as if I could just add a matrix as an attribute, but that doesn't allow changing the viewport, which is kind of important.
I cannot for the life of me think of any way that actually works, that allows all of the above. Switching cameras, switching textures, and instanced rendering.
To make matters worse, I also can't think of any nice way to cull instances.
The only real way I can see to really support everything, is to dynamically keep re-creating buffers for each render call, and copying tons of data each frame, both on the CPU side and GPU side.
For example - sort all of the instances per-camera into different JS arrays (culling can be done here), create a typed array for each one, copy all of the instance data from all of the instances into this array, bind it to a buffer, and redo this for every group, for every "view", for every frame.
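In code, that per-frame regrouping might look roughly like the following sketch; frustumContains, drawViewInstanced, and FLOATS_PER_INSTANCE are placeholders for whatever the real culling test, draw helper, and per-instance layout would be:

for (const camera of cameras) {
  gl.viewport(camera.x, camera.y, camera.width, camera.height);

  for (const view of views) {
    // Cull: keep only the instances this camera can see.
    const visible = view.instances.filter((inst) => camera.frustumContains(inst));

    // Repack their per-instance data into one contiguous typed array.
    const data = new Float32Array(visible.length * FLOATS_PER_INSTANCE);
    visible.forEach((inst, i) => data.set(inst.instanceData, i * FLOATS_PER_INSTANCE));

    // Re-upload and draw - this copying is the per-frame cost being described.
    gl.bindBuffer(gl.ARRAY_BUFFER, view.instanceBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);
    drawViewInstanced(view, visible.length);
  }
}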
This is basically OOP rendering, switched to instanced rendering right at the final step, and I doubt it will be any kind of fast...
tl;dr: instanced rendering is hell to actually use. Are there any known, proper techniques for using it with different textures/cameras/whatnot?
I am building a relatively simple Three.js Application.
Outline: You move with your camera along a path through a "world". The world is made up entirely of simple Sprites with SpriteMaterials that have transparent textures on them. The textures are basically GIF images with alpha transparency.
The whole application works fine and the performance is quite good. I reduced the camera depth as much as possible so objects are only rendered when they are quite close.
My problem is: I have many different "objects" (all sprites) with many different textures. I reuse the Texture/Material references for the same types of elements that are used multiple times in the scene (like trees and rocks).
Still, I'm getting to the point where memory usage goes up too much (above 2 GB) because of all the textures used.
Now, when moving through the world, not all objects are visible/displayed from the beginning, even though I add all the sprites to the scene from the very start. Checking the console, the not-yet-visible objects and their textures are only loaded when moving further into the world, once new elements actually become visible in the frustum. At that point the memory usage also goes up gradually.
I cannot really use "object pooling" for building the world because of its "layout", let's say.
To test, I added a function that removes objects from the scene and disposes of their material.map as soon as the camera has passed by. Something like:
this.env_sprites[i].material.map.dispose();
this.env_sprites[i].material.dispose();
this.env_sprites[i].geometry.dispose();
this.scene.remove(this.env_sprites[i]);
this.env_sprites.splice(i,1);
This works for garbage collection and frees up memory again. My problem then is that, when moving backwards with the camera, the sprites would need to be re-added to the scene and the materials/textures loaded again, which is quite heavy on performance and does not seem like the right approach to me.
Is there a known technique on how to deal with such a setup in regards to memory management and "removing" and adding objects and textures again (in the same place)?
I hope I could explain the issue well enough.
This is how the "world" looks, to give you an impression:
Each sprite individually wastes a lot of memory: your sprite imagery will rarely be square, so there is wasted space in each sprite's texture, and the mipmaps that are also needed for each sprite take a lot of space too. Also, on lower-end devices you might hit a limit on the total number of textures you can use -- but that may or may not be a problem for you.
In any case, the best way to limit GPU memory usage with so many distinct sprites is to use a texture atlas. That means you have at most a handful of textures, each texture contains many of your sprites, and you use distinct UV coordinates within the texture for each sprite. Even then you may still end up wasting memory over time due to fragmentation from the allocation and deallocation of sprites, but you would run out of memory far less quickly. If you use texture atlases you might even be able to load all sprites at the same time without having to deallocate them.
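As a hedged illustration of the per-sprite UV idea (shown with plane meshes rather than THREE.Sprite, since a shared atlas texture with custom UVs is easiest to demonstrate on a geometry; the atlas file and region coordinates are made up, and billboarding would need extra handling):

// One shared atlas texture and material for many objects.
const atlasTexture = new THREE.TextureLoader().load('atlas.png'); // hypothetical atlas image
const atlasMaterial = new THREE.MeshBasicMaterial({ map: atlasTexture, transparent: true });

// Build a plane whose UVs point at one sub-rectangle (u, v, w, h) of the atlas,
// given in 0..1 atlas coordinates.
function makeAtlasSprite(u, v, w, h) {
  const geometry = new THREE.PlaneGeometry(1, 1);
  const uv = geometry.attributes.uv;
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(i, u + uv.getX(i) * w, v + uv.getY(i) * h);
  }
  uv.needsUpdate = true;
  return new THREE.Mesh(geometry, atlasMaterial); // material and texture are shared
}

// e.g. a "tree" living in the lower-left quarter of the atlas (made-up coordinates):
scene.add(makeAtlasSprite(0.0, 0.0, 0.5, 0.5));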
If you want to try and tackle the problem programmatically, there's a library I wrote to manage sprite texture atlases dynamically. I use it to render text myself, but you can use any canvas functions to create your images. It currently uses a basic Knapsack algorithm (which is replaceable) to manage allocation of sprites across the textures. In my case it means I need only 2 1024x1024 textures rather than 130 or so individual sprite textures of wildly varying sizes and that really saves a lot of GPU memory.
Note that for offline use there are probably better tools out there that can generate a texture atlas and generate UV coordinates for each sprite, though it should be possible to use node.js together with my library to create the textures and UV data offline which can then be used in a three.js scene online. That would be an exercise left to the reader though.
Something you can also try is to pool all your most common sprites into "main" texture atlases which are always loaded, and load and unload the less commonly used ones on the fly.
I'm trying to render a bunch of trees that are all different sizes composed of two cubes (one for the leaves and one for the trunk).
If I create a vertex buffer for each tree, bind the buffer, and make a call to draw for each individual tree, the program will slow down immensely.
I faced a similar problem for terrain and solved it as follows:
Generate all the terrain at the beginning, place all vertices in a huge position buffer, and send the entire buffer to be drawn at once. This made the program run way faster than when I was sending each triangle of the terrain individually.
However, this process can't really be applied to the trees, because the number of trees is continuously changing (some die and some are born in each turn. They also grow in size as they get older, from a sapling to a huge oak).
This means I would have to remake the huge position buffer each frame, which is impractical.
Is there a way I could use the same position buffer for each tree, but just scale it differently? Maybe in the shader (I'm woefully confused by the code in the shader)?
If such a thing is possible, could it also be used to translate and rotate the same shape many times? If so, then I could apply that to how my terrain is rendered, making terraforming possible (if I tried to do that now then I'd have to remake the terrain vertex position buffer each frame terraforming is taking place, which is way too slow).
One constraint is that I do not want to use three.js. I rewrote my whole program using three.js and it was much, much slower, and much uglier since the way I was previously using textures couldn't be used with three.js. I would really appreciate an answer that sticks with just the baseline webgl and javascript, since it leaves me with more control over optimizing my final code.
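One way to do this with plain WebGL, sketched here on the assumption that a WebGL2 context is available (WebGL1 needs the ANGLE_instanced_arrays extension instead), is instanced drawing: keep one shared cube vertex buffer and give each tree a per-instance offset and scale attribute, so only a small per-instance buffer has to be re-uploaded when trees are added, removed, or grow. The same idea with a full per-instance matrix would also cover rotation and translation, which could apply to terraforming the terrain as well.

// Vertex shader: apply a per-instance offset and scale to the shared cube positions.
const vertexSrc = `#version 300 es
in vec3 aPosition;        // shared cube vertex, same buffer for every tree
in vec3 aInstanceOffset;  // per-tree world position
in float aInstanceScale;  // per-tree size
uniform mat4 uViewProjection;
void main() {
  vec3 world = aPosition * aInstanceScale + aInstanceOffset;
  gl_Position = uViewProjection * vec4(world, 1.0);
}`;

// Per-instance data: re-uploaded only when the set of trees changes, not per frame.
function updateTreeInstances(gl, instanceBuffer, trees) {
  const data = new Float32Array(trees.length * 4); // x, y, z, scale per tree
  trees.forEach((t, i) => data.set([t.x, t.y, t.z, t.scale], i * 4));
  gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.DYNAMIC_DRAW);
}

// Attribute setup (offsetLoc/scaleLoc come from gl.getAttribLocation on the linked program):
// gl.vertexAttribPointer(offsetLoc, 3, gl.FLOAT, false, 16, 0);
// gl.vertexAttribDivisor(offsetLoc, 1);  // advance once per instance, not per vertex
// gl.vertexAttribPointer(scaleLoc, 1, gl.FLOAT, false, 16, 12);
// gl.vertexAttribDivisor(scaleLoc, 1);

// Then all trees draw with a single call, reusing the same cube vertex buffer:
// gl.drawArraysInstanced(gl.TRIANGLES, 0, cubeVertexCount, trees.length);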
I am working on a 3D environment in three.js and am running into some lagging/performance issues because I am loading too many images into my scene. I am loading about 300 500x500 .pngs (I know that's a lot) and rendering them all at once. Naturally it would seem that I would get much better performance if I didn't render all of the images in the scene at once, but rather only when my character/camera is within a certain distance of an object that requires an image for its material. My question is: which method would be best to use?
Load all of the images and map them to materials that I save in an array. Then programmatically apply the correct materials to objects that I am close to from that array and apply default (inexpensive) materials to objects that I am far away from.
Use a relatively small maximum camera depth in association with fog to prevent the renderer from having to render all of the images at once.
My main goal is to create a system that allows for all of the images to be viewed when the character is near them while also being as inexpensive on the user's computer as possible.
Perhaps there is a better method than the two that I have suggested?
Thanks!
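For what it's worth, a minimal sketch of the first suggested method (distance-based material swapping), where the threshold, the imageObjects structure, and the placeholder material are all assumptions:

// Swap between a cheap placeholder material and the real textured material
// based on distance to the camera. Names and threshold are illustrative.
const NEAR_DISTANCE = 50; // world units; tune for the scene
const placeholderMaterial = new THREE.MeshBasicMaterial({ color: 0x888888 });

// imageObjects is assumed to be an array of { mesh, texturedMaterial } entries,
// where texturedMaterial holds the preloaded 500x500 texture.
function updateMaterialsByDistance(camera, imageObjects) {
  for (const entry of imageObjects) {
    const distance = entry.mesh.position.distanceTo(camera.position);
    const wantTextured = distance < NEAR_DISTANCE;
    if (wantTextured && entry.mesh.material !== entry.texturedMaterial) {
      entry.mesh.material = entry.texturedMaterial;
    } else if (!wantTextured && entry.mesh.material !== placeholderMaterial) {
      entry.mesh.material = placeholderMaterial;
    }
  }
}

// Called once per frame before rendering:
// updateMaterialsByDistance(camera, imageObjects);

Note that this only reduces rendering cost; the textures still occupy memory unless they are also disposed and reloaded as they go in and out of range.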