how do we create turtle geometry in three.js? - javascript

We are trying to create a simple programming environment that lets people generate 3D forms (it's inspired by the Scratch project). We'd like it to work in a "turtle geometry" fashion, where a creature (we call it the beetle, by analogy to the logo turtle) moves around the 3D space, and can leave objects along the way that take on its position and orientation.
We are currently using Three.js. While you can move and rotate objects, it's not clear how to create the effect we want, where translations and rotations "accumulate" and can be applied to new objects. We'd also like to store a stack of these matrices to push and pop.
Here's a specific example. The user of our system would create a program like this (this is pseudocode):
repeat 36 [
  move 10
  rotate 10
  draw cube
]
The idea is that the beetle would move around a circle as it executes this program, leaving a cube at each position.
Is this possible using Three.js? Do we need to switch to pure WebGL?

You might have a look at
http://u2d.com/turtle_js/index.html
It uses JavaScript syntax though.
And it does not give any feedback in case of errors; the turtle just does nothing.

The requirements sound fairly high-level, so there is no reason you would need the low-level access of raw WebGL. A higher-level library like three.js (or another) would do just fine.
As #Orbling pointed out, you'd need to figure out how rotation works in 3D; e.g.
rotateX 10
(rotate 10 degrees counterclockwise around x axis), or
turnLeft 10
(rotate 10 degrees counterclockwise around current up vector).
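The accumulating move/rotate state with a push/pop stack can be sketched in plain JavaScript; the `Beetle` class and its methods below are illustrative names, not an existing API. In three.js you could instead keep a `THREE.Matrix4` (or a parentless `THREE.Object3D`) and copy its matrix onto each new cube.

```javascript
// Minimal "beetle": a position plus a heading, with a stack for push/pop.
class Beetle {
  constructor() {
    this.x = 0; this.y = 0; this.z = 0;
    this.angle = 0;            // heading around the up (y) axis, in degrees
    this.stack = [];
  }
  move(d) {                    // translate along the current heading
    const r = this.angle * Math.PI / 180;
    this.x += d * Math.cos(r);
    this.z += d * Math.sin(r);
  }
  rotate(deg) { this.angle += deg; }
  push() { this.stack.push({ x: this.x, y: this.y, z: this.z, angle: this.angle }); }
  pop()  { Object.assign(this, this.stack.pop()); }
  // With three.js, "draw cube" would copy this state onto a new mesh, e.g.:
  //   cube.position.set(this.x, this.y, this.z);
  //   cube.rotation.y = -this.angle * Math.PI / 180;
}

// The example program from the question: a regular 36-gon closes,
// so the beetle ends up back where it started.
const b = new Beetle();
for (let i = 0; i < 36; i++) { b.move(10); b.rotate(10); }
console.log(Math.hypot(b.x, b.z) < 1e-6 ? "closed the circle" : "open");
```

The stack makes branching structures (trees, spirals within spirals) possible: push before a detour, pop to return.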

Related

Programmatically build meshes - UV mapping

I am working on a system to procedurally build meshes for "mines". Right now I am not aiming for visual perfection; I am focused on the basics.
I have reached the point where I can generate the shape of the mines and, from that, generate the 2 meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but my problem is that the ground is really hard to map to UV coordinates properly, and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting it while its area is greater than X.
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (and a displacement map as well; please ignore that for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
In theory that should be right. I suspected the order of the vertices was creating the problem, but if that were the case the texture should just appear rotated or odd, not odd AND stretched like that.
Once I get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to procedurally build the mesh, send it to multiple clients, and achieve the same visual rendering everywhere. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on shaders on the client side; otherwise, being able to place tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the rendering on the client side; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced, without needing a whole client/server infrastructure.
Any suggestion?
Thanks!
I sorted out the issue in my code; it was a pretty silly mistake: I was adding 3 UV mappings per triangle instead of 1 per vertex, causing a huge visual mess. Once that was fixed, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
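For anyone hitting the same issue: the fix amounts to emitting one UV pair per vertex, normalized against the bounding box of the whole ground. A minimal sketch of that planar projection (the function and data names are illustrative, not from the original code):

```javascript
// One UV pair per *vertex*, not three per triangle: normalize each vertex's
// (x, y) into [0, 1] against the bounding box of the ground mesh.
function planarUVs(vertices) {          // vertices: [{ x, y }, ...]
  const xs = vertices.map(v => v.x);
  const ys = vertices.map(v => v.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  return vertices.map(v => ({
    u: (v.x - minX) / (maxX - minX),
    v: (v.y - minY) / (maxY - minY),
  }));
}

// A 10 x 20 rectangular patch of ground:
const uvs = planarUVs([{ x: 0, y: 0 }, { x: 10, y: 0 },
                       { x: 10, y: 20 }, { x: 0, y: 20 }]);
console.log(uvs[2]); // the far corner maps to { u: 1, v: 1 }
```

Triangles then index into this per-vertex UV array rather than carrying their own copies, which avoids the duplicated-mapping mistake described above.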

Can I use WebGL camera to animate the transitions between a set of polygons?

A friend just recommended looking into WebGL instead of css transitions. I have a set of polygons that make up a 2d board game.
Basically, the app moves the player space by space, starting at the top of the "C", and we want a first-person view of moving to the next space in the sequence.
The points are plotted, and I was thinking in terms of normalizing each shape, rotating them into the proper direction, adding perspective by transforming translateZ, and finally transitioning them along the interior angles from space to space, while working out how to sequence those transitions between spaces.
Is there an easier way to move a WebGL camera through the spaces instead of pushing the polygons through transitions to simulate perspective? Perhaps a library that helps with this?
Thanks all!
WebGL doesn't have a camera. WebGL is a rasterization library: it draws pixels (or 4-value things) into arrays (the canvas, textures, renderbuffers). Cameras are something you implement yourself in JavaScript, or you use a library like Three.js that implements them for you.
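Since the camera is just state you animate yourself, moving through the board spaces can be as simple as interpolating a position and look-at target each frame and handing them to a three.js camera (`camera.position.set(...)` / `camera.lookAt(...)`). A library-free sketch with illustrative board data:

```javascript
// Linear interpolation between two [x, y, z] points.
function lerp3(a, b, t) {
  return [a[0] + (b[0] - a[0]) * t,
          a[1] + (b[1] - a[1]) * t,
          a[2] + (b[2] - a[2]) * t];
}

// Spaces along the board as [x, y, z] points (illustrative data).
const spaces = [[0, 0, 0], [10, 0, 0], [10, 0, 10]];

// progress in [0, spaces.length - 1]; integer part = current space,
// fractional part = how far toward the next space we are.
function cameraPoseAt(progress) {
  const i = Math.min(Math.floor(progress), spaces.length - 2);
  const t = progress - i;
  return {
    position: lerp3(spaces[i], spaces[i + 1], t), // -> camera.position.set(...)
    target: spaces[i + 1],                        // -> camera.lookAt(...)
  };
}

console.log(cameraPoseAt(0.5).position); // halfway to the second space
```

Each animation frame you advance `progress` (e.g. with an easing function) and reapply the pose; three.js's PerspectiveCamera then gives you the first-person projection for free.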

3D models on a Node.js server

I'm making a 3D game, and I was told here and here that I should always perform collision detection on the server side. Now, the problem is that I don't know how! Collision between players on a flat plane is easy, because players are represented by a cylinder, but how do I do collision detection when the map itself is a model with hills and so on? Is it possible to somehow import a 3D model on a Node.js server? And then, say I do have the model imported, are there some collision detection libraries, or will I have to do the math myself? (My last resort would be converting the models to JSON (models are already converted to JSON) and then reading in all the vertices, but that would be a pain.)
The models are made in Blender and can be converted to .obj, .dae or .js (JSON) (this is what I currently use).
If there are no such modules, that allow the models to be imported and collisions to be checked against them, I guess I could make it myself. In that case, please do provide some hints, or further reading on how to test if a point is colliding with an object defined by some vertices.
EDIT: The "map" will have objects on it, like trees and houses. I was thinking, maybe even caves. It will also have hills and what-not, but a heightmap is out of the question I guess.
If you're going for a do-it-yourself solution, I'd suggest the following.
Terrain
Preferably have the terrain be a grid, with just each vertex varying in height. The result is a bunch of quads, each with the same dimensions in the (x, y) plane. Vertices have varying (z) values to make up the slopes. The grid arrangement allows you to easily determine which triangles you will have to test for intersection when performing collision checks.
If that's not an option (terrain pre-built in modeling package), you'll probably want to use an in-memory grid anyway, and store a list of triangles that are (partially) inside each cell.
Checking for collision
The easiest approach would be to consider your actors as points in space. Then you'd determine the height of the terrain at that point as follows:
Determine grid cell the point is in
Get triangles associated with cell
Get the triangle containing the point (in the (x, y) plane)
Get height of triangle/plane at point
In the case of the "pure grid" terrain, step 3 involves just a single point/plane check to determine which of the 2 triangles we need. Otherwise, you'd have to do a point-in-triangle check for each triangle (you can probably re-use point/plane checks or use BSP for further optimization).
Step 4 pseudo-code:
point = [ x, y, z ] // actor position
relativePt = point - triangle.vertices[0]
normal = triangle.plane.normal
distance = relativePt DOT normal // (this is a dot-product)
intersection = [
point.x,
point.y,
point.z + distance / normal.z ]
This calculates the intersection of the ray straight up/down from the actor position with the triangle's plane. So that's the height of the terrain at that (x, y) position. Then you can simply check if the actor's position is below that height, and if so, set its z-coordinate to the terrain height.
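Steps 3-4 above can be turned into a small runnable helper. This is a sketch under the stated assumptions (triangles given as three `[x, y, z]` points; the point-in-triangle test uses 2D half-plane sign checks; all names are illustrative):

```javascript
// 2D orientation of point p relative to edge a->b.
function edgeSign(p, a, b) {
  return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1]);
}

// Step 3: is (x, y) inside the triangle's projection onto the (x, y) plane?
function inTriangle2D(p, [a, b, c]) {
  const d1 = edgeSign(p, a, b), d2 = edgeSign(p, b, c), d3 = edgeSign(p, c, a);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);          // all signs agree (or zero) => inside
}

// Step 4: height of the triangle's plane at (x, y).
function planeHeightAt(x, y, [a, b, c]) {
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];   // edge vectors
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n = [u[1] * v[2] - u[2] * v[1],                // cross product = normal
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]];
  // Solve n . ([x, y, z] - a) = 0 for z.
  return a[2] - (n[0] * (x - a[0]) + n[1] * (y - a[1])) / n[2];
}

function terrainHeightAt(x, y, triangles) {
  for (const tri of triangles) {
    if (inTriangle2D([x, y], tri)) return planeHeightAt(x, y, tri);
  }
  return null;                         // outside the terrain
}

// A single slanted triangle whose height rises with x (so z = x on it):
const tris = [[[0, 0, 0], [10, 0, 10], [0, 10, 0]]];
console.log(terrainHeightAt(5, 2, tris)); // 5
```

In the grid setup described above, `triangles` would be just the two (or few) triangles of the cell the actor is in, found in step 1-2.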
Objects (houses, trees, ... )
Give each object 1 or more convex collision volumes that together roughly correspond to its actual shape (see this page on UDN to see how the Unreal Engine works with collision hulls for objects).
You will have to use some spatial subdivision technique to quickly determine which of all world objects to check for collision when moving an actor. If most movement is in 2 dimensions (for example, just terrain and some houses), you could use a simple grid or a quadtree (which is like a grid with further subdivisions). A 3-dimensional option would be the octree.
The point of the spatial subdivision is the same as with the terrain organized as a grid: to associate objects with cells/volumes in space, so you can determine the set of objects to check for a collision instead of checking against all objects.
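The simple-grid option can be sketched as a tiny spatial index (class and key names are illustrative): objects register into every cell their bounding box overlaps, and a query returns only the candidates near the actor.

```javascript
// Uniform 2D grid over the (x, y) plane; each cell holds the objects
// whose bounding boxes overlap it.
class SpatialGrid {
  constructor(cellSize) {
    this.cellSize = cellSize;
    this.cells = new Map();            // "cx,cy" -> array of objects
  }
  _key(cx, cy) { return cx + "," + cy; }
  insert(obj, minX, minY, maxX, maxY) {
    const s = this.cellSize;
    for (let cx = Math.floor(minX / s); cx <= Math.floor(maxX / s); cx++) {
      for (let cy = Math.floor(minY / s); cy <= Math.floor(maxY / s); cy++) {
        const key = this._key(cx, cy);
        if (!this.cells.has(key)) this.cells.set(key, []);
        this.cells.get(key).push(obj);
      }
    }
  }
  query(x, y) {                        // objects in the actor's current cell
    const s = this.cellSize;
    return this.cells.get(this._key(Math.floor(x / s), Math.floor(y / s))) || [];
  }
}

const grid = new SpatialGrid(10);
grid.insert("house", 2, 2, 8, 8);      // lands in cell (0, 0)
grid.insert("tree", 25, 25, 27, 27);   // lands in cell (2, 2)
console.log(grid.query(5, 5));         // only "house" is a candidate here
```

A quadtree or octree replaces the flat `Map` with recursive subdivision, but the query contract (position in, nearby candidates out) stays the same.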
Checking for collision
Get the "potential collision objects" using the spatial subdivision technique you've chosen; e.g. get the objects in the actor's current grid cell.
For each convex collision volume of each object:
Using the separating axis theorem, determine if the actor intersects with the collision volume. See my answer to a different post for some implementation hints (that question is about the 2D case, but the code largely applies; just read "edge" as "plane").
If collision occurs, use the normal of one of the "offending planes" to move the actor to just next to that plane.
Note: In this case, model your actor's collision volume as a box or 3-sided cylinder or so.
Also, you may want to consider building a BSP tree for each object and using axis-aligned bounding boxes for your actors instead. That's getting a little beyond the scope of this answer, but if your objects have more complicated collision volumes, it will be faster.
Final thoughts
Well, this is already a really long answer, and these are just some broad strokes. Collision is a pretty broad topic, because there are so many different approaches you can take depending on your needs.
For example, I haven't covered "trace collision", which is detecting collision when an actor moves. Instead, the above suggestion on objects checks if an actor is inside an object. This may or may not suit your needs.
I also just realized I haven't covered actor-vs-actor collision. Perhaps that's best done as colliding 2 circles in the (x, y) plane, with an additional check to see if their vertical spaces intersect.
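That circle-plus-vertical-interval check is short enough to sketch directly (the field names on the actor objects are assumptions):

```javascript
// Actor-vs-actor: two circles in the (x, y) plane, plus an overlap check
// on the vertical [z, z + height] intervals.
function actorsCollide(a, b) {
  // a, b: { x, y, z, radius, height }, with z the bottom of the cylinder
  const dx = a.x - b.x, dy = a.y - b.y;
  const r = a.radius + b.radius;
  const horizontal = dx * dx + dy * dy < r * r;            // circles overlap
  const vertical = a.z < b.z + b.height && b.z < a.z + a.height;
  return horizontal && vertical;
}

const p1 = { x: 0, y: 0, z: 0, radius: 1, height: 2 };
const p2 = { x: 1.5, y: 0, z: 1, radius: 1, height: 2 };
console.log(actorsCollide(p1, p2)); // true
```

Comparing squared distances avoids a square root per pair, which matters once many actors share a cell.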
Anyway, I really gotta wrap this up. Hopefully this will at least point you in the right direction.

Rendering box2d rectangle as a 3d rectangle in three.js

When rendering box2d bodies with the canvas, there's the big problem that box2d only gives us the centre of the body and its angle (while the canvas draws shapes like rectangles from the top-left corner). By looking at some tutorials from Seth Land I've bypassed this problem, and it now works fine (I would never have been able to figure out something like this by myself):
g.ctx.save();
g.ctx.translate(b.GetPosition().x*SCALE, b.GetPosition().y*SCALE);
g.ctx.rotate(b.GetAngle());
g.ctx.translate(-(b.GetPosition().x)*SCALE, -(b.GetPosition().y)*SCALE);
g.ctx.strokeRect(b.GetPosition().x*SCALE-30,b.GetPosition().y*SCALE-5,60,10);
g.ctx.restore();
where b is the body, SCALE is the canvas-pixels/box2d-metres conversion factor, and 60 and 10 are the dimensions of the rectangle (so 30 and 5 are the half-dimensions).
_
The problem
Now, I would like to render this with three.js. The first objection is that three.js works in 3D while box2d doesn't. No problem there: I just want to build a 3D rectangle on top of the 2D one. The two in-plane dimensions must come from box2d, while the last dimension can be totally arbitrary.
My work doesn't require 3D physics; 2D will be sufficient, but I would like to give these 2D objects a thickness in the third dimension. How can I do that? So far I've been trying like this:
// look from above
camera.position.set(0, 1000, 0);
camera.rotation.set(-1.5, 0, 0);
geometry = new THREE.CubeGeometry(200, 60, 10, 1, 1, 1);
// get b as box2d rectangle
loop() {
    mesh.rotation.y = -b.GetAngle();
    mesh.position.x = b.GetPosition().x * SCALE;
    mesh.position.y = -b.GetPosition().y * SCALE;
}
Unfortunately, this doesn't look like the box2d debug draw at all. -1.5 is not the right value to turn the camera by a perfect 90-degree angle (does anyone know the exact value?) and, even worse, the three.js rectangle doesn't follow the box2d movements, showing almost the same problems I had with the standard canvas before using context translate and context rotate.
I hope anyone have time to explain a possible solution. Thanks a lot in advance! :)
EDIT: it looks someone did it already
(requires webgl on chrome): http://game.2x.io/
http://serv1.aelag.com:8082/threeBox
The second one uses just spheres, so there's no problem with the rotation mapping between box2d and three.js. The first one also includes cubes and rectangles: that's what I'm trying to do.
What type of camera are you using?
For a "2D" effect you need to use an orthographic camera; otherwise you will get perspective projection and things won't match what you're seeing in 2D.
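Two details worth pinning down: the exact straight-down angle asked about is `-Math.PI / 2` (about -1.5708, not -1.5), and box2d's y axis points down while three.js's points up, so both the y position and the rotation flip sign. A sketch of the mapping, with the box2d body mocked so it stays self-contained (`SCALE` is an assumed value; in real code the camera would be e.g. `new THREE.OrthographicCamera(...)` looking down the z axis):

```javascript
const SCALE = 30; // canvas pixels per box2d metre (an assumed value)

// Map a box2d body's pose into the three.js xy plane. The sign flips
// account for box2d's y-down, counterclockwise-angle convention.
function box2dToThree(body) {
  return {
    position: { x: body.GetPosition().x * SCALE,
                y: -body.GetPosition().y * SCALE,
                z: 0 },
    rotationZ: -body.GetAngle(),   // rotate about the axis facing the camera
  };
}

// In the render loop:
//   const pose = box2dToThree(b);
//   mesh.position.set(pose.position.x, pose.position.y, pose.position.z);
//   mesh.rotation.z = pose.rotationZ;
const fakeBody = { GetPosition: () => ({ x: 2, y: 3 }), GetAngle: () => 0.5 };
console.log(box2dToThree(fakeBody).position); // { x: 60, y: -90, z: 0 }
```

Keeping this conversion in one helper makes it much easier to debug than scattering sign flips through the loop, and with an orthographic camera the result should match the box2d debug draw pixel for pixel.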

Background with three.js

Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can move only along the Z axis.
I tried to use:
a cube mapping shader - PROBLEM: artefacts with the shadow planes, the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artefacts with the shadow plane - it has edge highlighting, OR only the other sprites are drawn, without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing on the models
What else can I try? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over that first "buffer": use the buffer as the background (painting it in 2D with an orthographic projection, and disabling depth-buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
If you want a "3D" background, i.e. something that follows the rotation of your camera but does not react to movement (is infinitely far away), then the only way to do it is with a cubemap.
The other solution is an environment dome - a full 3D object.
If you want a static background, then you should be able to just use an HTML background; I'm not sure why this would fail or what 'aliasing in models' you are talking about.
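The multi-pass idea maps onto three.js's `WebGLRenderer` as `renderer.autoClear = false` plus two `render()` calls, with `clearDepth()` between them (one way to keep the main scene on top). The renderer below is mocked so the call order is visible without a WebGL context; this is a sketch, not the only way to structure it.

```javascript
// Two-pass frame: background scene first, main scene on top, no clearing
// in between. With three.js, `renderer` would be a THREE.WebGLRenderer.
function renderFrame(renderer, bgScene, bgCamera, scene, camera) {
  renderer.autoClear = false;            // we manage clearing ourselves
  renderer.clear();                      // clear color + depth once
  renderer.render(bgScene, bgCamera);    // background pass
  renderer.clearDepth();                 // main scene always draws on top
  renderer.render(scene, camera);        // foreground pass
}

// Mock renderer that records the call order (for illustration only).
const calls = [];
const mock = {
  clear: () => calls.push("clear"),
  clearDepth: () => calls.push("clearDepth"),
  render: (s) => calls.push("render:" + s),
};
renderFrame(mock, "background", null, "main", null);
console.log(calls); // ["clear", "render:background", "clearDepth", "render:main"]
```

The background camera can be a fixed orthographic one (for a flat backdrop) or a rotation-only copy of the main camera (for a skybox-like effect), while the main camera moves freely.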
