I previously built a 3D environment that allows the user to draw a shape and then extrude it into 3D geometry (e.g. a circle into a cylinder), using the JavaScript library Three.js and its ExtrudeGeometry() function.
Currently I am building a similar program, but on a mobile platform using Xamarin and MonoGame/XNA. Being able to extrude a shape into 3D geometry makes many things much easier; however, so far I haven't found a function that provides similar functionality. Is there a counterpart to Three.js's ExtrudeGeometry in MonoGame/XNA, or an alternative way to accomplish the same result?
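For reference, the Three.js version described above looks roughly like this (a minimal sketch; recent releases name the extrusion length `depth`, while older ones used `amount`):

```js
// Draw a circular 2D shape, then extrude it along z into a cylinder-like solid.
const shape = new THREE.Shape();
shape.absarc(0, 0, 5, 0, Math.PI * 2, false); // a circle of radius 5

const geometry = new THREE.ExtrudeGeometry(shape, {
  depth: 10,          // extrusion length along z
  bevelEnabled: false // keep the edges sharp
});
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
```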
I am trying to write a layout extension and have already looked at the examples provided, both from the existing extensions (e.g. arbor, cola, cose-bilkent, etc.) and the scaffolding here. The place where I am hung up is the WebGL renderer. In all of the examples, this is handled by the core (for canvas), if I am not mistaken. Is it possible to use a WebGL renderer through three.js? If so, is it as simple as creating/attaching the required WebGL elements in the extension (e.g. scene, cameras, light sources, etc.)?
The reason for the WebGL push is that I want to implement a 3D adjacency matrix (I cannot remember where I found the paper, but someone had implemented this in a desktop application with subject, predicate, and object as the X, Y, and Z axes), and I don't see another way to do that efficiently for large result sets on the order of 10-25K nodes/edges.
Cytoscape supports multiple renderers, but it does not support 3D co-ordinates. Positions are defined as (x, y).
You could add a 3D renderer if you like, but you'd have to use a data property for the z position, because the position object doesn't support z.
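For example (a minimal sketch, assuming the z value is kept in a hypothetical `z` data field):

```js
// Positions in Cytoscape.js are strictly (x, y); z has to live in ordinary data.
cy.add({ data: { id: 'n1', z: 42 }, position: { x: 100, y: 100 } });

cy.nodes().forEach((node) => {
  const x = node.position('x');
  const y = node.position('y');
  const z = node.data('z') || 0; // read the custom z field
  // ...hand (x, y, z) to the custom 3D renderer...
});
```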
It's a lot of work to write a renderer, especially one that's performant and fully featured. If you were to write a 2D renderer, you could reuse all the existing hit tests, gestures/interaction, events, etc. -- so you could focus on the drawing tech. To write a 3D renderer, you'd have to do all the rendering logic from scratch.
If your data requires 3D (e.g. representing 3D atomic bonds or 3D protein structures), then writing a 3D renderer may be a good idea. If it's just to have a neat 3D effect, it's probably not worth it -- as 3D is much harder for users to navigate and understand.
I wonder if there is an easy way to apply a perspective transformation/distortion/skew to a simple plane (or even a 3D object) in three.js, preferably with mouse controls?
Something like this example: jsfiddle.net/rjw57/A6Pgy/, except in three.js (that fiddle is built directly in WebGL).
Otherwise, any idea how to achieve a similar result?
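For illustration, one possible route (a sketch, not from the original fiddle; vertex indices assume a current BufferGeometry-based three.js) is to distort a PlaneGeometry by moving its corner vertices and driving those coordinates from mouse drags:

```js
// A 1x1-segment plane has exactly four corner vertices.
const geo = new THREE.PlaneGeometry(2, 2, 1, 1);
const pos = geo.attributes.position;

// Pull the top-right corner (vertex index 1) outward to "skew" the plane;
// a mousemove/drag handler would update these coordinates interactively.
pos.setXY(1, 1.5, 1.3);
pos.needsUpdate = true;
geo.computeVertexNormals();

const plane = new THREE.Mesh(
  geo,
  new THREE.MeshBasicMaterial({ side: THREE.DoubleSide })
);
```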
I have created multiple 3D cubes in an HTML5 canvas.
I was trying to handle a click event on the 3D cubes so that I can know which cube was clicked.
To create the cubes I used ProcessingJS.
It worked well, but I was not able to get the click position.
I read about Paper.js, which creates a shape and stores it in an object.
Is it possible to create 3D things with Paper.js?
Or is there any way I can find out which cube was clicked through ProcessingJS?
Please share any other ways to do this.
Thanks in advance.
Paper.js deals with 2D vector graphics.
While in theory you can represent a cube if you want to, using skewed squares for example, it will take a lot of effort and time to create just one cube.
You are much better off using a 3D library, e.g. Three.js.
Here is a ready-made example that uses raycasting to detect clicks on a side of a cube: http://mrdoob.github.io/three.js/examples/canvas_interactive_cubes.html
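The gist of that example, in current three.js terms (assuming a `camera` and an array of cube meshes `cubes` already exist):

```js
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

window.addEventListener('click', (event) => {
  // Convert the click to normalized device coordinates (-1..+1 on both axes).
  pointer.x = (event.clientX / window.innerWidth) * 2 - 1;
  pointer.y = -(event.clientY / window.innerHeight) * 2 + 1;

  // Cast a ray from the camera through the click point.
  raycaster.setFromCamera(pointer, camera);
  const hits = raycaster.intersectObjects(cubes);
  if (hits.length > 0) {
    console.log('clicked:', hits[0].object.name); // nearest cube under the cursor
  }
});
```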
I am supposed to draw a transparent sphere that can be rotated by mouse dragging. I tried to start learning the basics of WebGL but have not found any appropriate sources. Can anybody help me with how to start learning WebGL, how to draw 3D shapes such as a sphere or a tetrahedron, and how to spin them in all dimensions?
Thanks.
I'd recommend you use three.js, but if you really want to learn WebGL, start with http://webglfundamentals.org ;)
I would suggest using a framework like three.js that abstracts WebGL's functionality; it makes doing things like this a breeze. Check out threejs.org and http://learningthreejs.com/
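To illustrate the suggestion, here is a minimal three.js sketch of a transparent, mouse-rotatable sphere (assuming `scene`, `camera`, and `renderer` are already set up and the OrbitControls script is loaded):

```js
const sphere = new THREE.Mesh(
  new THREE.SphereGeometry(1, 32, 16),
  new THREE.MeshPhongMaterial({
    color: 0x2194ce,
    transparent: true, // enable alpha blending
    opacity: 0.5       // see-through sphere
  })
);
scene.add(sphere);

// OrbitControls orbits the camera around the sphere on mouse drag,
// which gives the "spin in all dimensions" effect.
const controls = new THREE.OrbitControls(camera, renderer.domElement);
```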
I'm trying to build a camera calibration function for a three.js app and could very much do with some help.
I have a geometry in a 3D scene; let's say for this example it is a cube. As this object and scene exist in 3D, all properties of the cube are known.
I also have a physical copy of the geometry.
What I would like to do is take a photo of the physical geometry and mark 4+ points on the image in x and y. These points correspond to 4+ points on the 3D geometry.
Using these point correspondences, I would like to work out the orientation of the camera relative to the geometry in the photo, and then match a virtual camera to the 3D geometry in the three.js scene.
I looked into the possibility of using an AR library such as JS-aruco or JSARToolkit. But these systems need a marker, whereas my system needs to be markerless. The user will choose the 4 (or more) points on the image.
I've been doing some research and identified that Tsai's algorithm for camera alignment should suit my needs.
While I have a good knowledge of JavaScript and three.js, my linear algebra isn't the best, and so I am having trouble translating the algorithm into JavaScript.
If anyone could give me some pointers, or is able to explain the process of Tsai's algorithm in a JavaScript manner, I would be so super ultra thankful.
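As one concrete piece on the three.js side (not Tsai's algorithm itself): once any solver hands back an extrinsic rotation R and translation t (placeholder values below), applying them to a three.js camera looks roughly like this (`invert()` in recent three.js; older versions use `getInverse`):

```js
// Hypothetical solver output: identity rotation, camera backed off along z.
const R = [1, 0, 0,  0, 1, 0,  0, 0, 1];
const t = [0, 0, -5];

// Build the 4x4 view matrix [R | t] (row-major arguments to set()).
const view = new THREE.Matrix4().set(
  R[0], R[1], R[2], t[0],
  R[3], R[4], R[5], t[1],
  R[6], R[7], R[8], t[2],
  0,    0,    0,    1
);

camera.matrixAutoUpdate = false;   // keep three.js from overwriting the matrix
camera.matrix.copy(view).invert(); // camera world matrix = inverse of view matrix
camera.updateMatrixWorld(true);
```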