Is there a way to map an image onto the side of a cylinder with three.js? Would I have to map individual portions of the image onto each face on the side of the cylinder, or is there a faster way to do this? Thanks!
For texture allocation and mapping in general, you'll want to learn about UV mapping, which you typically do before importing into three.js. All 3D modeling programs have that functionality; Blender is the most popular and arguably the best among the free ones.
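That said, for a plain cylinder you may not need any external UV work: three.js generates side UVs for CylinderGeometry, so a single texture wraps around the side without slicing the image yourself. A minimal sketch (the image path is a placeholder):

```js
// Material array order for CylinderGeometry is [side, top cap, bottom cap].
const texture = new THREE.TextureLoader().load('label.jpg'); // placeholder path
const side = new THREE.MeshBasicMaterial({ map: texture });
const caps = new THREE.MeshBasicMaterial({ color: 0x888888 });
const cylinder = new THREE.Mesh(
  new THREE.CylinderGeometry(1, 1, 2, 64), // radiusTop, radiusBottom, height, radialSegments
  [side, caps, caps]
);
scene.add(cylinder);
```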
I need some guidance. I am new to three.js and want to make a terrain from a given terrestrial map image (for the texture) and given heightmap data.
For this I have to deal with a large-scale terrestrial map image, e.g. 4104x1856.
If I create a plane mesh with that many vertices, apply the heightmap to those vertices, and map the terrestrial texture onto the elevated surface, it becomes very slow.
I created a mesh with 4104x1856 segments, but I am sure that is a brute-force and far from optimal way of doing this.
I have two concerns:
Can I handle such a large scale in three.js?
If yes, what should I do to make it not only renderable but also efficient to interact with?
Thank you in advance!!!
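For concreteness, a simplified sketch of the kind of setup I mean, with placeholder file names; the segment counts are deliberately far below the image resolution, and a displacement map moves the per-vertex elevation work to the GPU instead of a JavaScript loop:

```js
const loader = new THREE.TextureLoader();
const colorMap = loader.load('terrain-color.jpg');  // placeholder file names
const heightMap = loader.load('terrain-height.png');

// 512x256 segments is roughly 131k vertices, versus ~7.6M for 4104x1856.
const geometry = new THREE.PlaneGeometry(100, 45, 512, 256);
const material = new THREE.MeshStandardMaterial({
  map: colorMap,
  displacementMap: heightMap, // vertices are displaced on the GPU
  displacementScale: 10,
});
const terrain = new THREE.Mesh(geometry, material);
terrain.rotation.x = -Math.PI / 2; // lay the plane flat
scene.add(terrain);               // assumes the scene has a light
```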
EDIT:
I originally had a question about exporting to OBJ and MTL, but discovered that I could export from three.js using GLTFExporter.js, and had success getting the geometry and texture out that way.
The issue I'm having with the GLTFExporter is that my textures have offset and repeat settings that don't seem to be exported: when I open the file in Blender, the whole texture covers the plane mesh that showed only a small part of the texture in the three.js scene.
Does anyone know what I could add to the GLTFExporter to record and keep the repeat and offset texture settings?
Many Thanks :)
I've hit this myself, and as far as I know the answer is no.
Offset and repeat are three.js-specific features. Some other libraries have equivalents; some engines use direct texture-matrix manipulation to achieve the same effect.
One workaround is to modify your model's UV coordinates before exporting, to reflect the settings of texture.offset and texture.repeat.
You would basically multiply each vertex UV by texture.repeat and then add texture.offset. That effectively "bakes" those parameters into the model's UVs, but it then requires you to reset .repeat and .offset back to (1, 1) and (0, 0) respectively in order to render the model correctly again in three.js, as in the sketch below.
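A minimal sketch of that baking step, assuming a BufferGeometry mesh with a single mapped texture (bakeUvTransform is just an illustrative name):

```js
function bakeUvTransform(mesh) {
  const texture = mesh.material.map;
  const uv = mesh.geometry.attributes.uv;
  // Fold the repeat/offset transform into every UV pair.
  for (let i = 0; i < uv.count; i++) {
    uv.setXY(
      i,
      uv.getX(i) * texture.repeat.x + texture.offset.x,
      uv.getY(i) * texture.repeat.y + texture.offset.y
    );
  }
  uv.needsUpdate = true;
  // Reset so the baked transform isn't applied twice when rendering:
  texture.repeat.set(1, 1);
  texture.offset.set(0, 0);
}
```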
Here's a slightly relevant thread from the GLTF working group:
https://github.com/KhronosGroup/glTF/issues/107
I previously did a project building a 3D environment that allows the user to draw a shape and then extrude it into a 3D geometry (e.g. a circle into a cylinder), using the JavaScript library three.js and its ExtrudeGeometry() function.
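For context, the three.js side looked roughly like this (a simplified sketch):

```js
// Draw a 2D circle shape, then extrude it into a cylinder-like solid.
const shape = new THREE.Shape();
shape.absarc(0, 0, 1, 0, Math.PI * 2, false); // circle of radius 1

const geometry = new THREE.ExtrudeGeometry(shape, {
  depth: 2,            // extrusion length
  bevelEnabled: false,
  curveSegments: 32,
});
scene.add(new THREE.Mesh(geometry, new THREE.MeshNormalMaterial()));
```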
Currently I am building a similar program, but on a mobile platform using Xamarin and MonoGame/XNA. Being able to extrude a shape into a 3D geometry makes many things much easier; however, so far I haven't found a function that provides similar functionality. Is there a counterpart of three.js's ExtrudeGeometry in MonoGame/XNA, or an alternative way to accomplish the same result?
I am trying to write a layout extension and have already looked at the examples provided, both the existing extensions (e.g. arbor, cola, cose-bilkent, etc.) and the scaffolding here. Where I am hung up is the WebGL renderer. In all of the examples this is handled by the core (for canvas), if I am not mistaken. Is it possible to use a WebGL renderer through three.js? If so, is it as simple as creating/attaching the required WebGL elements in the extension (e.g. scene, cameras, light sources, etc.)?
The reason for the WebGL push is that I want to implement a 3D adjacency matrix (I cannot remember where I found the paper, but someone had implemented this in a desktop application with subject, predicate, and object being the X, Y, and Z axes) and I don't see another way to do that efficiently for large result sets on the order of 10-25K nodes/edges.
Cytoscape supports multiple renderers, but it does not support 3D co-ordinates. Positions are defined as (x, y).
You could add a 3D renderer if you like, but you'd have to use a data property for the z position, because the position object doesn't support z.
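A minimal sketch of that convention (the z data field name is up to you):

```js
// Cytoscape positions stay 2D; the third axis lives in data,
// for a hypothetical 3D renderer or layout to read.
cy.add({
  group: 'nodes',
  data: { id: 'n1', z: 42 },
  position: { x: 100, y: 200 }
});

const node = cy.getElementById('n1');
const pos3d = { x: node.position('x'), y: node.position('y'), z: node.data('z') };
```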
It's a lot of work to write a renderer, especially one that's performant and fully featured. If you were to write a 2D renderer, you could reuse all the existing hit tests, gestures/interaction, events, etc. -- so you could focus on the drawing tech. To write a 3D renderer, you'd have to do all the rendering logic from scratch.
If your data requires 3D (e.g. representing 3D atomic bonds or 3D protein structures), then writing a 3D renderer may be a good idea. If it's just to have a neat 3D effect, it's probably not worth it -- as 3D is much harder for users to navigate and understand.
I'm trying to build a camera calibration function for a three.js app and could very much do with some help.
I have a geometry in a 3D scene; let's say for this example it is a cube. As this object and scene exist in 3D, all properties of the cube are known.
I also have a physical copy of the geometry.
What I would like to do is take a photo of the physical geometry and mark 4+ points on the image in x and y. These points correspond to 4+ points in the 3D geometry.
Using these point correspondences, I would like to work out the orientation of the camera relative to the geometry in the photo, and then match a virtual camera to the 3D geometry in the three.js scene.
I looked into the possibility of using an AR library such as JS-aruco or JSARToolkit. But those systems need a marker, whereas my system needs to be markerless. The user will choose the 4 (or more) points on the image.
I've been doing some research and identified that Tsai's algorithm for camera alignment should suit my needs.
While I have a good knowledge of JavaScript and three.js, my linear algebra isn't the best, and so I am having trouble translating the algorithm into JavaScript.
If anyone could give me some pointers, or explain the process of Tsai's algorithm in a JavaScript manner, I would be so super ultra thankful.
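For reference, the final step (applying a solved pose to a three.js camera) would look roughly like this; poseElements, imageHeight, and focalLengthPx are placeholders for values a Tsai/PnP solver would produce:

```js
// poseElements: a 4x4 world-to-camera matrix in column-major order.
const worldToCamera = new THREE.Matrix4().fromArray(poseElements);
const cameraToWorld = worldToCamera.clone().invert(); // three.js cameras use camera-to-world

camera.matrixAutoUpdate = false; // stop three.js overwriting the matrix each frame
camera.matrix.copy(cameraToWorld);
camera.updateMatrixWorld(true);

// Intrinsics: derive the vertical FOV from the calibrated focal length
// and the photo height (both in pixels).
camera.fov = THREE.MathUtils.radToDeg(2 * Math.atan(imageHeight / (2 * focalLengthPx)));
camera.updateProjectionMatrix();
```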