Hi, I want to render an interactive 3D sphere in the browser. The texture on it will be a world map, so essentially I am trying to create a globe that can be rotated in any direction with the mouse. I am comfortable rendering 2D images with SVG, but I am not sure how to render 3D shapes in it.
Is it possible to render a 3D shape in SVG, and if so, how? If not, is WebGL a better option?
Have a look at three.js, which abstracts the implementation a bit (it comes with WebGL, SVG, and Canvas backends).
SVG is a 2D vector graphics format, but you can project 3D shapes onto 2D, so it is possible to render 3D objects with SVG; it is just a fair amount of work, best left to JavaScript libraries.
WebGL is your best bet because of performance. You might be able to leverage (or at least learn from) demos like http://www.chromeexperiments.com/globe (see http://data-arts.appspot.com/globe-search). There are also other globe demos at http://www.chromeexperiments.com.
If you use SVG, shading is going to be a problem. Proper shading is not really possible in SVG, though you might be able to fake it in a few select circumstances. For 3D, definitely use WebGL if you have more than a dozen or so polygons in the model.
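To make the three.js/WebGL route concrete, here is a minimal sketch of a textured, rotating globe. The texture path 'earth.jpg' is a placeholder, and the API names (TextureLoader, etc.) assume a reasonably recent three.js build:

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 3;

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// 'earth.jpg' is a placeholder for your world-map texture
var texture = new THREE.TextureLoader().load('earth.jpg');
var globe = new THREE.Mesh(
  new THREE.SphereGeometry(1, 64, 64),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(globe);

(function animate() {
  requestAnimationFrame(animate);
  globe.rotation.y += 0.005; // simple spin; use OrbitControls for free mouse rotation
  renderer.render(scene, camera);
})();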
You must transform every point with a projection.
Use this to project a 3D point (x, y, z) onto a 2D screen point (x, y):
// Language: JavaScript
// Point object
function Point(x, y, z) {
  this.x = x;
  this.y = y;
  this.z = z;
}
// Perspective projection: converts a 3D point to 2D screen coordinates
// (<< 8 multiplies by a fixed focal length of 256)
function ProjectionPoint(point) {
  if (!(point instanceof Point))
    throw new TypeError("ProjectionPoint: incorrect type parameter");
  return {
    x: (point.x << 8) / (point.z + Zorig) + Xorig,
    y: (point.y << 8) / (point.z + Zorig) + Yorig,
    z: point.z
  };
}
Make sure you have defined your origin point in the variables Xorig, Yorig, and Zorig.
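A hypothetical usage example, with made-up values for the screen centre and camera distance:

// Hypothetical values: screen centre (320, 240), camera distance 256
var Xorig = 320, Yorig = 240, Zorig = 256;
var p = ProjectionPoint(new Point(50, 80, 120));
// p.x and p.y are 2D screen coordinates; p.z is kept for depth sorting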
I have a 2D design in MicroStation and I want to reproduce it on the web using some tool (JavaScript, Unity 3D, or anything else). The web tool does not need the full feature set, just basic functionality such as reshaping existing shapes or adding new ones.
My approach so far: once I have created a design in MicroStation, I capture the properties of its shapes, such as the coordinates of a line, and use those coordinates to reproduce the design in the browser. Since this is a 2D design, everything is plotted at some (x, y) location. For example, I created a straight line in MicroStation from (2, 2) to (10, 10), and from those coordinates I was able to redraw it in Unity. However, I am stuck on editing it, e.g. stretching it from (2, 2)-(10, 10) to (2, 2)-(20, 20) with a mouse click, and my goal is to do this at runtime, not in the Unity editor.
The straight line is just an example; I want to do this for all geometric shapes, so any guidance would be appreciated.
Right now I am trying Unity, but I am struggling with the editing part. Is there a way to achieve this in Unity?
I also looked at various JavaScript libraries such as KonvaJS, MakerJS, and ThreeJS, but apart from Konva none of them provide reshaping facilities, and in Konva I have not found a way to create shapes with the mouse.
Can this be achieved with either of the two approaches? Of course, I am not looking for full functionality, only a few custom features. If it is possible, which approach is best, and which tool should I proceed with?
Any guidance will be helpful.
To draw a line segment, you can use a LineRenderer.
// The two end points of the segment are known, e.g. taken from the
// question's example or from a GameObject's Transform
Vector3 start = new Vector3(2, 2, 0);
Vector3 end = new Vector3(10, 10, 0);
Color color = Color.white;

GameObject myLine = new GameObject();
myLine.transform.position = start;
LineRenderer lr = myLine.AddComponent<LineRenderer>();
lr.material = new Material(Shader.Find("Particles/Alpha Blended Premultiply"));
lr.startColor = color;   // SetColors() in older Unity versions
lr.endColor = color;
lr.startWidth = 0.1f;    // SetWidth() in older Unity versions
lr.endWidth = 0.1f;
lr.SetPosition(0, start);
lr.SetPosition(1, end);

// To change the points of this line later:
myLine.transform.position = another_start;
lr.SetPosition(0, another_start);
lr.SetPosition(1, another_end);
There are also other solutions:
Use a scaled cube or capsule primitive.
Use a third-party plugin such as Vectrosity.
To get the clicked position in world space, use Camera.main.ScreenToWorldPoint(Input.mousePosition).
To detect when the mouse button is released, use Input.GetMouseButtonUp.
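A hypothetical sketch combining the two calls to move a line's end point on click; the LineEditor class name and the lr field are made up for illustration:

// Attach to any GameObject and assign the LineRenderer created above to "lr".
using UnityEngine;

public class LineEditor : MonoBehaviour
{
    public LineRenderer lr;

    void Update()
    {
        if (Input.GetMouseButtonUp(0)) // left button released
        {
            // Convert the click from screen space to world space.
            // With a perspective camera, set the z of the screen point to the
            // distance from the camera to the drawing plane first.
            Vector3 world = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            world.z = 0f; // keep the 2D design on the z = 0 plane
            lr.SetPosition(1, world); // move the line's end point to the click
        }
    }
}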
I have a 3D model that was loaded into Three.js as an OBJ file. The model itself is a piece of furniture.
The problem is that the furniture's material is dynamic and varies in size (thickness). I need to be able to make the material thicker, but the total size of the model cannot change, so scaling the whole model is not an option.
Is there a way to resize parts of the model (a few specific meshes) without compromising the structure of the mesh itself? I need to change the thickness of the structure, but the internal parts of the model should not change.
The only solution I can think of is to change the scale of some of the meshes and then adjust the global positions of the other meshes to match. Is this the right way?
object.traverse(function (child) {
  if (child instanceof THREE.Mesh) {
    // resize and reposition some of the meshes
  }
});
Possible ways to solve it:
Bones
Deformation
Well, if all of the meshes are separate primitives, then you can just change the scale of each part you want to change along one axis, and set up anchor points to constrain the outside. For pieces on the border, you scale the empty object that they're attached to so that they maintain the outer shell.
For example:
OOOOOOOO
OMMMMMMO
OMmmmmMO
OMmmmmMO
OMMMMMMO
OOOOOOOO
where each O is an Object3D carrying the adjacent mesh M, and the m's represent meshes that are scaled themselves. This way, if you adjust the scale of all the m's and O's, the outer shell stays in place.
You're on the right track with the traversal, though.
For an easy way to traverse, I would give everything you want to change some attribute in its .userData object, because in some cases you'll want to scale empty objects (O) (so that you can effectively move the anchor point), whereas in others you'll want to scale the meshes in place (m). So it's not purely a mesh-based operation (meshes want to scale from their center). Some tagging makes the traversal simpler:
object.traverse(function (child) {
  if (child instanceof THREE.Mesh) {
    if (child.userData.isScalable) {
      // do the scaling
    }
  }
});
If you set up the hierarchy and .userData tagging correctly, then you just scale things and the outer shell is preserved.
Is this what you're asking? The question is a bit unclear.
You could use Clara.io; it is built on top of ThreeJS and allows you to run operators on geometry that you set up in Clara.io scenes. There is a thickness operator in Clara.io that you can use.
Documentation here: http://clara.io/learn/sdk/interactive-experiences
Anything you can do in the Clara.io editor you can do in an interactive-embed.
You can use your method of changing the sizes of some meshes and the positions of others, but when you use object.scale.set(x, y, z), the browser has to recompute the model's scale for every frame rendered, so doing this for lots of meshes can hurt your game's performance. The best way to go would be to make the change in a 3D editor like Blender; it is easier and more efficient.
I have an Object3D with many levels of children (more Object3Ds or Meshes/Lines). The Box3 class has a setFromObject() method which will compute a bounding box of an object and all of its descendants. This is the behavior I am looking for.
I can't use the setFromObject() method of Box3, however, because I am not using Geometry objects. Instead, the project I'm working on uses BufferGeometry exclusively. BufferGeometry objects do not have a .vertices property, which is what the setFromObject() function looks for when computing a bounding box.
var bbox = new THREE.Box3().setFromObject(object);
console.log(bbox.min); // x, y, and z are all Infinity.
console.log(bbox.max); // x, y, and z are all -Infinity.
I have also been experimenting with using the computeBoundingBox() method of BufferGeometry, but it does not seem to update the bounding box when the geometry is manipulated. I think it might be related to matrixAutoUpdate being false, but I've also tried explicitly calling updateMatrix() to no avail.
Is there a way to compute a bounding box on an Object3D and all of its descendants if using the BufferGeometry class? I'm new to Three.js, so any help would be appreciated!
I am using Three.js r66.
Box3.setFromObject( object ) now supports BufferGeometry, as of three.js r69dev (the development version).
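A minimal usage sketch under that assumption (r69 or later); refreshing the world matrices first is a general precaution, not something the fix itself requires:

// Make sure world matrices reflect any recent transforms
object.updateMatrixWorld(true);
var bbox = new THREE.Box3().setFromObject(object);
console.log(bbox.min, bbox.max); // finite values now, BufferGeometry included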
I'm using Three.js (without shaders, only existing object methods) to create animations, and my question is simple: I'm sure it's possible, but can you tell me (or help me figure out) how to combine several animations on a shape? For example, rotating and translating a sphere.
When I do:
three.sphere.rotation.y += 0.1;
three.sphere.translateZ(1);
the sphere rotates, but the translation vector rotates with it, so the translation has no net effect.
I know a bit of OpenGL and have used the glPushMatrix and glPopMatrix functions; do equivalents exist in this framework?
Cheers
Each three.js Object3D has a position, rotation, and scale. The rotation (always relative to its origin or "center") defines its own local axes (what the object sees as its own "front", "up", and "right" directions), and when you call translateZ, the object is moved along those local directions, not along the world (or parent) Z axis. If you want the latter, do three.sphere.position.z += 1 instead.
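A small sketch of the difference, reusing the sphere from the question:

// Local-space move: the sphere's own Z axis rotates with it
three.sphere.rotation.y += 0.1;
three.sphere.translateZ(1);   // direction changes as the sphere spins

// World-space move: independent of the sphere's rotation
three.sphere.rotation.y += 0.1;
three.sphere.position.z += 1; // always along the parent/world Z axis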
The order of transformation is important. You get a different result if you translate first and then rotate than if you rotate first and then translate. Of course with a sphere it will be hard to see the rotation.
I am using the Three.JS library to display a point cloud in a web browser. The point cloud is generated once at start-up and no further points are added or removed, but it does need to be rotated, panned, and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point clouds without using the image? The sphere geometry for example.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is. How do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient system for my particular case or should I just generate the particles and set their individual positions? As I mentioned I only generate the points once, but then rotate, zoom, pan the points. (I used the TrackBall sample code to get the mouse events working).
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:

var PI2 = Math.PI * 2;
var program = function (context) {
  context.beginPath();
  context.arc(0, 0, 1, 0, PI2, true);
  context.closePath();
  context.fill();
};

var particle = new THREE.Particle(new THREE.ParticleCanvasMaterial({
  color: Math.random() * 0x808008 + 0x808080,
  program: program
}));
Feel free to adapt the code for the WebGL renderer.
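For instance, a hypothetical adaptation for the WebGL renderer: draw the circle once to an offscreen canvas and use it as a point-sprite texture (PointsMaterial and CanvasTexture are the newer API names; older builds used ParticleBasicMaterial with a THREE.Texture):

// Draw a white circle once on a small offscreen canvas
var canvas = document.createElement('canvas');
canvas.width = canvas.height = 64;
var ctx = canvas.getContext('2d');
ctx.fillStyle = '#ffffff';
ctx.beginPath();
ctx.arc(32, 32, 30, 0, Math.PI * 2, true);
ctx.fill();

// Use the canvas as the sprite texture for every point
var material = new THREE.PointsMaterial({
  size: 4,
  map: new THREE.CanvasTexture(canvas),
  transparent: true,
  alphaTest: 0.5 // discard the transparent corners of each square sprite
});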
Another clever solution I've seen in the examples is using an encoded WebM video to store the data and passing it to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3, I saw the only difference was:

vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;

Adding this to the fragment shader fixed the problem for me.
It wasn't z-fighting, because some corners would randomly overlap distant particles. Setting material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency overlap problem of the underlying square structure by turning off depth testing on the material:

depthTest: false

The problem then is that if you add other objects to the scene, depth testing against them fails too, and the point cloud is rendered in front of the other objects regardless of the actual order. To get around that, you can additionally disable depth writes:

depthWrite: false
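A sketch of a point material with both flags applied; spriteTexture is assumed to be a circular sprite with transparent corners, and PointsMaterial is the newer API name:

var cloudMaterial = new THREE.PointsMaterial({
  map: spriteTexture,   // assumed: circular sprite with transparent corners
  transparent: true,
  depthTest: false,     // stops transparent corners obscuring nearby points
  depthWrite: false     // per the answer above, so other objects still occlude correctly
});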