Apply three.js subdivision modifier without changing outer geometry?

I am trying to take any three.js geometry and subdivide its existing faces into smaller faces. This would essentially give the geometry a higher "resolution". There is a subdivision modifier tool in the examples of three.js that works great for what I'm trying to do, but it ends up changing and morphing the original shape of the geometry. I'd like to retain the original shape.
View the Subdivision Modifier Example
Example of how the current subdivision modifier behaves:
Rough example of how I'd like it to behave:
The subdivision modifier is applied like this:
let originalGeometry = new THREE.BoxGeometry(1, 1, 1);
let subdivisionModifier = new THREE.SubdivisionModifier(3);
let subdividedGeometry = originalGeometry.clone();
subdivisionModifier.modify(subdividedGeometry);
I attempted to dig around the source of the subdivision modifier, but I wasn't sure how to modify it to get the desired result.
Note: The subdivision should be able to be applied to any geometry. My example of the desired result might make it seem that a three.js PlaneGeometry with increased segments would work, but I need this to be applied to a variety of geometries.

Based on the suggestions in the comments by TheJim01, I was able to dig through the original source and modify the vertex weight, edge weight, and beta values to retain the original shape. My modifications should remove any averaging, and put all the weight toward the source shape.
There were three sections that had to be modified, so I went ahead and made it an option, retainShape, that can be passed into the constructor and defaults to false.
I made a gist with the modified code for SubdivisionGeometry.js.
View the modified SubdivisionGeometry.js Gist
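The core of the change boils down to something like the sketch below. The variable names here are illustrative, not necessarily those in the gist; see the gist for the exact code.
// Sketch of the retainShape idea (illustrative names; see the gist above)
if (retainShape) {
    // new edge vertices become plain midpoints instead of weighted averages
    edgeVertexWeight = 0.5;
    faceVertexWeight = 0;
    // original vertices keep all of their own weight (beta = 0), so they never move
    sourceVertexWeight = 1;
    connectingVertexWeight = 0;
}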
Below is an example of a cube being subdivided with the option turned off, and turned on.
Left: new THREE.SubdivisionModifier(2, false);
Right: new THREE.SubdivisionModifier(2, true);
If anyone runs into any issues with this or has any questions, let me know!

The current version of three.js has optional parameters for PlaneGeometry that specify the number of segments for the width and height; both default to 1. In the example below I set both widthSegments and heightSegments to 128. This has a similar effect to using SubdivisionModifier, but unlike SubdivisionModifier it does not distort the shape, which works better for me.
var widthSegments = 128;
var heightSegments = 128;
var geometry = new THREE.PlaneGeometry(10, 10, widthSegments, heightSegments);
// var geometry = new THREE.PlaneGeometry(10, 10); // segments default to 1
// var modifier = new THREE.SubdivisionModifier( 7 );
// geometry = modifier.modify(geometry);
https://threejs.org/docs/#api/en/geometries/PlaneGeometry
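Most of the other built-in primitives expose similar segment parameters, so the same approach generalizes beyond planes. For example (the segment counts here are just illustrative):
var box = new THREE.BoxGeometry(1, 1, 1, 16, 16, 16); // widthSegments, heightSegments, depthSegments
var sphere = new THREE.SphereGeometry(5, 64, 32); // widthSegments, heightSegments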

Related

Three.js detect when object is partially and fully occluded

I'm trying to detect when an object in Three.js is partially or fully occluded by (hidden behind) another object.
My current simple solution casts a single ray to the center of the object:
function getScreenPos(object) {
    var pos = object.position.clone();
    camera.updateMatrixWorld();
    pos.project(camera);
    return new THREE.Vector2(pos.x, pos.y);
}
function isOccluded(object) {
    raycaster.setFromCamera(getScreenPos(object), camera);
    var intersects = raycaster.intersectObjects(scene.children);
    if (intersects[0] && intersects[0].object === object) {
        return false;
    } else {
        return true;
    }
}
However, it doesn't account for the object's dimensions (width, height, depth).
Not occluded (because center of object is not behind)
Occluded (because center of object is behind)
View working demo:
https://jsfiddle.net/kmturley/nb9f5gho/57/
Currently I'm thinking I could calculate the object's box size and cast rays for each corner of the box. But this might still be a little too simple:
var box = new THREE.Box3().setFromObject(object);
var size = box.getSize(new THREE.Vector3()); // getSize requires a target vector in recent three.js
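Fleshed out, that corner-ray idea might look something like this sketch (still approximate, since points between the corners are never tested):
// Approximate occlusion test using the 8 corners of the bounding box;
// returns 0 (all corners visible) through 1 (all corners hidden)
function cornerOcclusionRatio(object) {
    var box = new THREE.Box3().setFromObject(object);
    var corners = [];
    [box.min.x, box.max.x].forEach(function (x) {
        [box.min.y, box.max.y].forEach(function (y) {
            [box.min.z, box.max.z].forEach(function (z) {
                corners.push(new THREE.Vector3(x, y, z));
            });
        });
    });
    var hidden = 0;
    corners.forEach(function (corner) {
        var ndc = corner.clone().project(camera); // to normalized device coordinates
        raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);
        var intersects = raycaster.intersectObjects(scene.children);
        if (!intersects[0] || intersects[0].object !== object) hidden++;
    });
    return hidden / corners.length;
}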
I would like to find a more robust approach which could give partially occluded and fully occluded booleans values or maybe even percentage occluded?
Search Stack Overflow and the Three.js examples for "GPU picking." The concept can be broken down into three basic steps:
1. Change the material of each shape to a unique flat (MeshBasicMaterial) color.
2. Render the scene with the unique materials.
3. Read the pixels of the rendered frame to collect color information.
Your scenario allows for a few shortcuts:
- Give only the shape you're testing a unique color--everything else can be black.
- You don't need to render the full scene to test one shape. You could adjust your viewport to render only the area surrounding the shape in question.
- Because you gave a color only to your test part, the rest of the data should be zeroes, making it much easier to find pixels matching your unique color.
Now that you have the pixel data, you can determine the following:
- If NO pixels match the unique color, then the shape is fully occluded.
- If SOME pixels match the unique color, then the shape is at least partially visible.
The second bullet says that the shape is "at least partially" visible. This is because you can't test for full visibility with the information you currently have.
What I would do (and someone else might have a better solution) is render the same viewport a second time, but only have the test shape visible, which is the equivalent of the part being fully visible. With this information in hand, compare the pixels against the first render. If both have the same number (perhaps within a tolerance) of pixels of the unique color, then you can say the part is fully visible/not occluded.
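As a sketch of that comparison (countMatchingPixels is a hypothetical helper, and it assumes the unique color's id sits in a known channel of the RGBA buffer):
// Hypothetical helper: count pixels matching the test shape's id in an RGBA
// pixel buffer read back from a render target
function countMatchingPixels(pixelBuffer, id) {
    var count = 0;
    for (var i = 0; i < pixelBuffer.length; i += 4) {
        if (pixelBuffer[i] === id) count++; // assumes the id is stored in the red channel
    }
    return count;
}
// fully visible when both renders show the same number of matching pixels,
// within some tolerance
var fullyVisible = Math.abs(countInScene - countAlone) <= tolerance;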
I managed to get a working version for WebGL1 based on TheJim01's answer!
First create a second simpler scene to use for calculations:
pickingScene = new THREE.Scene();
pickingTextureOcclusion = new THREE.WebGLRenderTarget(window.innerWidth / 2, window.innerHeight / 2);
pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
pickingScene.add(new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries([
    createBuffer(geometry, mesh),
    createBuffer(geometry2, mesh2)
]), pickingMaterial));
Recreate your objects as BufferGeometry (better for performance):
// quaternion, matrix and color are scratch variables declared once in an
// outer scope (see the linked fiddle)
function createBuffer(geometry, mesh) {
    var buffer = new THREE.SphereBufferGeometry(geometry.parameters.radius, geometry.parameters.widthSegments, geometry.parameters.heightSegments);
    quaternion.setFromEuler(mesh.rotation);
    matrix.compose(mesh.position, quaternion, mesh.scale);
    buffer.applyMatrix4(matrix);
    applyVertexColors(buffer, color.setHex(mesh.name));
    return buffer;
}
Add a color based on mesh.name, e.g. an id of 1, 2, 3, etc.:
function applyVertexColors(geometry, color) {
    var position = geometry.attributes.position;
    var colors = [];
    for (var i = 0; i < position.count; i++) {
        colors.push(color.r, color.g, color.b);
    }
    geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
Then, during the render loop, render the second scene to that render target and match the pixel data against the mesh name:
function isOccludedBuffer(object) {
    renderer.setRenderTarget(pickingTextureOcclusion);
    renderer.render(pickingScene, camera);
    // RGBA bytes for the half-resolution target: (w / 2) * (h / 2) * 4 = w * h
    var pixelBuffer = new Uint8Array(window.innerWidth * window.innerHeight);
    renderer.readRenderTargetPixels(pickingTextureOcclusion, 0, 0, window.innerWidth / 2, window.innerHeight / 2, pixelBuffer);
    renderer.setRenderTarget(null);
    // works because mesh.name holds a small numeric id (see applyVertexColors above)
    return !pixelBuffer.includes(object.name);
}
You can view the WebGL1 working demo here:
https://jsfiddle.net/kmturley/nb9f5gho/62/
One caveat to note with this approach is that your picking scene needs to stay up to date with changes in your main scene. So if your objects change position/rotation etc., they need to be updated in the picking scene too. In my example the camera is moving, not the objects, so the picking scene doesn't need updating.
For WebGL2 we will have a better solution:
https://tsherif.github.io/webgl2examples/occlusion.html
But this is not supported in all browsers yet:
https://www.caniuse.com/#search=webgl
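For reference, the WebGL2 approach in that example is built on hardware occlusion queries, which in raw WebGL2 (no three.js wrapper) look roughly like the sketch below; drawTestObject is a hypothetical stand-in for the draw calls of the shape being tested:
var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE, query);
drawTestObject(); // hypothetical: issue the draw calls for the test shape
gl.endQuery(gl.ANY_SAMPLES_PASSED_CONSERVATIVE);
// results arrive asynchronously, typically a frame or two later
if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
    var anySamplesPassed = gl.getQueryParameter(query, gl.QUERY_RESULT);
    var occluded = !anySamplesPassed;
}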

Parallax effect using three.js

I would like to build a parallax effect from a 2D image using a depth map, similar to this, or this but using three.js.
Question is, where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without parallax occlusion. Once I add my depth map as the displacementMap property I can see some sort of displacement, but it is very low-res. (Maybe because displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10);
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
Update 1
So I figured out that I seem to need a custom ShaderMaterial for this. After looking at pixijs's implementation, I found that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: how?
Would be awesome if someone could point me in the right direction, thanks!
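For anyone attempting the same port, a minimal three.js version of that kind of depth-offset shader could look roughly like the sketch below. The uniform names and the 0.015 strength factor are illustrative, not pixi's actual shader; spriteMap and depthMap come from the TextureLoader calls in the question, and mouse should be updated on mousemove with values around [-1, 1]:
const material = new THREE.ShaderMaterial({
    uniforms: {
        map: { value: spriteMap },
        depthMap: { value: depthMap },
        mouse: { value: new THREE.Vector2(0, 0) }
    },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
        }
    `,
    fragmentShader: `
        uniform sampler2D map;
        uniform sampler2D depthMap;
        uniform vec2 mouse;
        varying vec2 vUv;
        void main() {
            float depth = texture2D(depthMap, vUv).r; // 0 = far, 1 = near
            vec2 offset = mouse * 0.015 * depth;      // shift near pixels more
            gl_FragColor = texture2D(map, vUv + offset);
        }
    `
});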

How to create a texture from multiple graphics

I'm new to PixiJS and I'm trying something simple like a painting application.
I'm having difficulty trying to capture a collection of shapes as a single grouping. I'm not interested in working code for this as I'd like to figure that out on my own; I'm simply interested in knowing whether I'm on the right track or if I need to explore some other PixiJS concepts to get what I need.
I have one canvas in which I can drag shapes such as rectangles, ellipse, and lines. These "strokes" are being stored as individual Graphics objects, for instance:
var shape = new PIXI.Graphics();
shape.position.set(...);
...
shape.lineStyle(...)
.beginFill(...)
.drawRect(...)
.endFill();
...
stage.addChild(shape);
...
renderer.render(stage);
I'm also holding onto these shapes in an array:
shapes.push(shape);
Now that I have these displayed as well as have the order of the strokes available, I'd like to be able to capture them somehow. Imagine maybe taking the drawing and saving it, or perhaps using it as a thumbnail in a gallery, or simply just storing it on the back-end in a database, preferably keeping all the raw strokes so that they can be scaled up or down as desired.
For now, I'm simply trying to take this collection of strokes and display them again by holding them, clearing the graphics from my canvas, and then plopping down what I have held.
Looking at this example, I've been able to get a texture that I can reliably reproduce wherever I click with the mouse:
http://jsfiddle.net/gzh14bcn/
This means I've been able to take the first part that creates the texture object, and I tweaked the second part to create and display the sprites when I click the mouse.
When I try to replace this example code with my own code to create the texture itself, I can't get that part to work.
So this example snippet works fine when I try to create a sprite from it:
var texture = new PIXI.RenderTexture(renderer, 16, 16);
var graphics = new PIXI.Graphics();
graphics.beginFill(0x44FFFF);
graphics.drawCircle(8, 8, 8);
graphics.endFill();
texture.render(graphics);
FYI to create sprites:
var sprite = new PIXI.Sprite(texture);
sprite.position.set(xPos, yPos);
stage.addChild(sprite);
Since I have my shapes in the shapes array or on the stage, what is the preferred way I proceed to capture this as a single grouping from which I can create one or more sprites?
So basically you've got how to make a PIXI.Graphics shape:
var pixiRect = new PIXI.Graphics();
pixiRect.lineStyle(..);
pixiRect.beginFill(..);
pixiRect.drawRect(..);
pixiRect.endFill(..);
(You can draw as many rects/circles/shapes as you want into one PIXI.Graphics)
But to convert it to a texture, you must tell the renderer to create it:
var texture = renderer.generateTexture(pixiRect);
Then you can easily create a PIXI.Sprite from this texture:
var spr = new PIXI.Sprite(texture);
And the last thing is to add it to your stage or to an array; you can also make an empty PIXI.Container and addChild to that.
Option 1 - add the sprite (created from the graphics) to the stage:
stage.addChild(spr);
Option 2 - push it to your array:
shapes.push(spr);
Option 3 - if you have var shapes = new PIXI.Container();, make it a container for your sprites:
shapes.addChild(spr);
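Note that generateTexture accepts any display object, so with the container option you can bake everything you've drawn into a single texture in one call (a sketch, assuming shapes is the PIXI.Container above):
var combined = renderer.generateTexture(shapes); // bakes all children at once
var thumbnail = new PIXI.Sprite(combined);
stage.addChild(thumbnail);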
Working example : https://jsfiddle.net/co7Lrbq1/3/
EDIT:
To position your canvas on top, you have to addChild it later: the first addChild sits at zIndex = 0, and every subsequent addChild adds a layer on top of the last.
I figured it out. My stage is a container:
var stage = new PIXI.Container();
var canvas = new PIXI.Graphics();
canvas.lineStyle(4, 0xffffff, 1);
canvas.beginFill(0xffffff);
canvas.drawRect(canvasStartX, canvasStartY, 500, 600);
canvas.endFill();
stage.addChild(canvas);
I changed this to the following:
var canvas = new PIXI.Container();
var canvasRect = new PIXI.Graphics();
canvasRect.lineStyle(4, 0xffffff, 1);
canvasRect.beginFill(0xffffff);
canvasRect.drawRect(canvasStartX, canvasStartY, 500, 600);
canvasRect.endFill();
canvas.addChild(canvasRect);
stage.addChild(canvas);
Then, I replaced stage with canvas where appropriate and canvas with canvasRect where appropriate.
Finally, I got my texture with:
var texture = canvas.generateTexture(renderer);
At the moment this grabbed the entire width/height of the stage, but I think I just need to tweak how I create my canvas above and I should be fine.

How to change a scale in three js (through assignment)

I have two objects in three.js, and I would like them to share the value of the scale vector:
mesh1 = new THREE.Mesh(geometry, material);
mesh1.scale.x = 0.47;
mesh2 = new THREE.Mesh(geometry, material);
mesh2.scale = mesh1.scale; // This does not work
The last line has no effect. The documentation does not state that the scale property is read-only. I've taken a look at the source and found that the property is not defined as writable. Is this a bug in the documentation, or is this the way three.js works and there is no point in documenting it :-) ?
Is it possible to share scale (and other vectors) between different meshes? Is the only way to do it by copying the values?
mesh2.scale.copy(mesh1.scale); // Copy the vector over
UPDATE: This seemed to work in old versions of threejs - such as the one used in the following example. Was this functionality disabled on purpose?
Object3D's position, rotation, quaternion and scale properties are immutable.
See the source code file Object3D.js.
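The relevant pattern in the constructor is roughly the following (paraphrased): each property is defined with a fixed value and no setter, which is why a plain assignment is silently ignored in non-strict mode.
// Paraphrased from the r72 Object3D constructor: value-only properties,
// so `object.scale = vector` has no setter to invoke
Object.defineProperties(this, {
    position: { enumerable: true, value: position },
    rotation: { enumerable: true, value: rotation },
    quaternion: { enumerable: true, value: quaternion },
    scale: { enumerable: true, value: scale }
});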
For example, you can no longer use the following pattern:
object.scale = vector;
Instead, you must use either
object.scale.set( x, y, z );
or
object.scale.copy( vector );
Similarly for the other properties mentioned.
three.js r.72

How do I 'wrap' a plane over a sphere with three.js?

I am relatively new to three.js and am trying to position and manipulate a plane object to have the effect of lying over the surface of a sphere object (or any object, for that matter), so that the plane takes the form of the object's surface. The intention is to be able to move the plane on the surface later on.
I position the plane in front of the sphere and index through the plane's vertices, casting a ray towards the sphere to detect the intersection with the sphere. I then try to change the z position of said vertices, but it does not achieve the desired result. Can anyone give me some guidance on how to get this working, or indeed suggest another method?
This is how I attempt to change the vertices (with an offset of 1 to be visible 'on' the sphere surface):
planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
Making sure to set the following before rendering:
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;
I have a fiddle that shows where I am. Here I cast my rays in z, but I do not get intersections (collisions) with the sphere, and cannot change the plane in the manner I wish.
http://jsfiddle.net/stokewoggle/vuezL/
You can rotate the camera around the scene with the left and right arrows (in chrome anyway) to see the shape of the plane. I have made the sphere see through as I find it useful to see the plane better.
EDIT: Updated fiddle and corrected description mistake.
Sorry for the delay, but it took me a couple of days to figure this one out. The reason the collisions were not working was because (like we had suspected) the planeMesh vertices are in local space, which is essentially the same as starting in the center of the sphere - not what you're expecting. At first, I thought a quick fix would be to apply the worldMatrix like stemkoski did in the three.js collision example on his GitHub that I linked to, but that didn't end up working either, because the plane itself is defined only in x and y coordinates - up and down, left and right - with no z information (depth) made locally when you create a flat 2D planeMesh.
What ended up working is manually setting the z component of each vertex of the plane. You had originally wanted the plane to be at z = 201, so I just moved that code inside the loop that goes through each vertex and manually set each vertex to z = 201. Now all the ray start positions were correct (globally), and having a ray direction of (0, 0, -1) resulted in correct collisions.
var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
localVertex.z = 201;
One more thing: in order to make the plane-wrap absolutely perfect in shape, instead of using (0, 0, -1) as each ray direction, I manually calculated each ray direction by subtracting each vertex from the sphere's center position and normalizing the resulting vector. Now the collisionResult intersection point will be even better.
var directionVector = new THREE.Vector3();
directionVector.subVectors(sphereMesh.position, localVertex);
directionVector.normalize();
var ray = new THREE.Raycaster(localVertex, directionVector);
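Putting the two snippets together, the per-vertex loop might look roughly like this (a sketch, assuming the planeMesh has an identity transform as in the fiddle, so local vertex coordinates equal world coordinates once z is set manually):
for (var vertexIndex = 0; vertexIndex < planeMesh.geometry.vertices.length; vertexIndex++) {
    var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
    localVertex.z = 201; // the plane's intended world z from the question

    var directionVector = new THREE.Vector3();
    directionVector.subVectors(sphereMesh.position, localVertex);
    directionVector.normalize();

    var ray = new THREE.Raycaster(localVertex, directionVector);
    var collisionResults = ray.intersectObjects([sphereMesh]);
    if (collisionResults.length > 0) {
        // stop 1 unit short of the surface so the plane sits visibly 'on' it
        var offset = collisionResults[0].distance - 1;
        planeMesh.geometry.vertices[vertexIndex]
            .copy(localVertex)
            .add(directionVector.multiplyScalar(offset));
    }
}
planeMesh.geometry.verticesNeedUpdate = true;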
Here is a working example:
http://jsfiddle.net/FLyaY/1/
As you can see, the planeMesh fits snugly on the sphere, kind of like a patch or a band-aid. :)
Hope this helps. Thanks for posting the question on three.js's github page - I wouldn't have seen it here. At first I thought it was a bug in THREE.Raycaster but in the end it was just user (mine) error. I learned a lot about collision code from working on this problem and I will be using it later down the line in my own 3D game projects. You can check out one of my games at: https://github.com/erichlof/SpacePong3D
Best of luck to you!
-Erich
Your ray start position is not good, probably due to the vertex coordinates being local to the plane. You start the raycast from inside the sphere, so it never hits anything.
I changed the ray start position like this as a test and get 726 collisions:
var rayStart = new THREE.Vector3(0, 0, 500);
var ray = new THREE.Raycaster(rayStart, new THREE.Vector3(0, 0, -1));
Forked jsfiddle: http://jsfiddle.net/H5YSL/
I think you need to transform the vertex coordinates to world coordinates to get the position correctly. That should be easy to figure out from docs and examples.
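With the legacy Geometry class used in the fiddle, that transformation looks like this:
// convert a local vertex to world coordinates before raycasting
planeMesh.updateMatrixWorld();
var worldVertex = planeMesh.geometry.vertices[vertexIndex].clone()
    .applyMatrix4(planeMesh.matrixWorld);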
