Creating a simple physics engine for fun, wondering how rotation is determined when two rectangles collide.
Looking for something like this: https://i.stack.imgur.com/MmAcP.png
Let's say we have a 3D cube (actually the shapes can be of any form, potentially very complex, but let's start with a cube) represented in an XYZ coordinate system. We are looking at the cube from some distant point, at some angle to the front face (for example, the camera might be looking at the cube at the same angle to all three XYZ axes).
How do I detect the invisible faces programmatically (in this example the bottom, left and back faces will be hidden)?
The simple way of removing non-visible triangles is called Backface Culling. The basic idea is that a triangle's three 3D points are projected onto the 2D screen, where they end up ordered either clockwise or counter-clockwise. That ordering tells you whether the triangle's normal vector points into or out of the screen, and from the orientation of the normal alone you can tell whether you see the triangle from its front side or its back side. You just drop the triangles you see from the back side.
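A minimal sketch of that winding test, assuming the triangle's vertices have already been projected to 2D screen coordinates (the exact sign convention depends on your winding order and on whether the screen's y axis points up or down):

// Signed area of the projected triangle; its sign encodes the winding (clockwise vs. counter-clockwise).
function signedArea(p0, p1, p2) {
  return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);
}

// With counter-clockwise front faces and y pointing up, a non-positive area means "seen from the back".
function isBackFacing(p0, p1, p2) {
  return signedArea(p0, p1, p2) <= 0;
}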
This is a very simple approach, but what you really are asking is different. If you have convex/concave 3D polyhedra you want to know if a front-facing triangle is still visible or obstructed by other triangles of the same structure.
This is technically very complex and generally not advisable to do. Look at this simple example:
+--------+
|        |   <-- rectangle R
+--------+

+--------+
|        |
|  +--+  |
|  |  |  |   <-- U shape
|  |  |  |
+--+  +--+
If we overlay one on the other, you get this:
+--------+
|        |
|  +--+  |
|  |RR|  |   <-- RR is rectangle R seen through the U shape
|  |--|  |
+--+  +--+
Even in this simple example you have to compute the intersections of the two polygons to see whether rectangle R behind shape U is completely obstructed or whether parts of it are still visible.
As the complexity of the polyhedron increases, so does the algorithm needed to decide which triangles are visible and which aren't.
In other words: this is not a viable approach unless it doesn't have to run in realtime, and perhaps only for non-graphics purposes.
In CG you instead use a Z-buffer: you draw ALL front-facing triangles (that means you did backface culling beforehand) and keep the depth (Z) of each fragment in a buffer. While rendering the fragments (think "pixels") you check whether the new Z is in front of or behind the previously rendered fragment at that position, and that decides whether the single "pixel" is visible or not.
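Per fragment it boils down to something like this (a sketch only; depthBuffer, colorBuffer and width are assumed helpers, not any particular API):

// Software depth test for a single fragment. depthBuffer starts out filled with Infinity.
function shadeFragment(x, y, z, color) {
  var i = y * width + x;
  if (z < depthBuffer[i]) {   // closer to the camera than anything drawn here so far
    depthBuffer[i] = z;
    colorBuffer[i] = color;   // this "pixel" of the triangle is visible
  }
  // otherwise the fragment is hidden behind previously rendered geometry and is discarded
}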
So, on current CG hardware (GPUs), there is no way to tell in realtime (that is, at 60 Hz) whether a triangle is fully or partially visible (or fully obstructed) until you have drawn ALL the triangles.
Obviously there are algorithms which can do visibility checking on the CPU (not necessarily in realtime), but there is next to zero probability that you could implement something equivalent yourself. Those algorithms are extremely complicated and require months of study...
So: backface culling is quick and dirty, visibility checking is very complicated and only feasible if you use an external tool, and the Z-buffer requires you to actually render the triangles, so it is not an optimization technique for avoiding drawing unnecessary geometry.
Since your shapes can be arbitrary, one method that would work is ray casting: https://en.wikipedia.org/wiki/Ray_casting.
Basically, you cast rays (vectors) into your scene and calculate the intersection of those rays with your objects; this can be done with fairly simple math.
If a ray hits more than one point, you know the first hit is on a visible surface, while the following hits are on invisible (obstructed) surfaces.
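As a minimal sketch of such an intersection test, here is the ray/sphere case (the sphere representation is an assumption; for a mesh you would run a ray/triangle test per triangle instead):

// Returns the smallest non-negative t where origin + t * dir hits the sphere, or null on a miss.
// dir is assumed to be normalized; sphere is { center: { x, y, z }, radius }.
function raySphere(origin, dir, sphere) {
  var ox = origin.x - sphere.center.x,
      oy = origin.y - sphere.center.y,
      oz = origin.z - sphere.center.z;
  var b = 2 * (ox * dir.x + oy * dir.y + oz * dir.z);
  var c = ox * ox + oy * oy + oz * oz - sphere.radius * sphere.radius;
  var disc = b * b - 4 * c;
  if (disc < 0) return null;               // the ray misses the sphere entirely
  var t = (-b - Math.sqrt(disc)) / 2;      // nearer of the two intersection points
  return t >= 0 ? t : null;
}

// Cast one ray per pixel (or per sample) and keep the hit with the smallest t:
// that surface is visible, and every later hit along the same ray is hidden behind it.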
I have written a path tracer that uses this technique to detect what is visible to a camera: https://github.com/jo-va/hop.
Feel free to have a look and reuse some code!
Hope that helps.
I have a simulation with XYZ angles and XYZ G-forces. All three G-forces are drawn (scaled according to their float values) as arrows (vectors) from a common (0, 0, 0) point.
Now I want to draw the resultant force from that same point, based on those three vectors.
The question is: how do I do that in three.js? Or maybe someone has a nice example of it.
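For what it's worth, a minimal sketch of one way to draw the resultant in three.js, assuming the three G-force components are plain numbers gx, gy, gz and a scene already exists:

// The resultant is the vector sum of the three (axis-aligned) force vectors,
// which here is simply new THREE.Vector3(gx, gy, gz).
var resultant = new THREE.Vector3(gx, gy, gz);

var origin = new THREE.Vector3(0, 0, 0);
var length = resultant.length();
var dir = resultant.clone().normalize();   // ArrowHelper expects a unit direction

var arrow = new THREE.ArrowHelper(dir, origin, length, 0xff0000);
scene.add(arrow);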
Using Phaser I created a game in "Arcade Mode". The game has a snowball, and I need to give the sprite a physics body that isn't rectangular (i.e. a circle). How do I do that?
setSize doesn't have a radius property:
setSize(width, height, offsetX, offsetY)
Phaser's Arcade physics system is based upon bounding rectangles. If you want to use the Arcade physics system for your game, then you'll need to represent your snowball as a rectangular object in the system (which doesn't need to be the same size as the sprite; it could be a smaller size within).
If you want to use a circular body then you'll have to look at one of the other physics systems that Phaser supports. In your case P2 is probably what you're looking for.
See the official Phaser example 'Collide Custom Bounds' for an example of using a circle to define bounds in P2.
It effectively involves adding P2 physics to the object, or the group the object is in, and then using something like snowball.body.setCircle(16);
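Roughly, with the Phaser 2 P2 system and 16 as a placeholder radius:

// Enable the P2 physics system and give the snowball a circular body.
game.physics.startSystem(Phaser.Physics.P2JS);
game.physics.p2.enable(snowball);
snowball.body.setCircle(16); // radius in pixels; tweak to match the sprite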
I'm making a 3D game, and I was told here and here that I should always perform collision detection on the server side. Now, the problem is that I don't know how! Collision between players on a flat plane is easy, because players are represented by cylinders, but how do I do collision detection when the map itself is a model with hills and so on? Is it possible to somehow import a 3D model into a Node.js server? And then, say I do have the model imported, are there collision detection libraries, or will I have to do the math myself? (My last resort would be converting the models to JSON (the models are already converted to JSON) and then reading in all the vertices, but that would be a pain.)
The models are made in Blender and can be converted to .obj, .dae or .js (JSON) (this is what I currently use).
If there are no such modules that allow the models to be imported and collisions to be checked against them, I guess I could make one myself. In that case, please provide some hints or further reading on how to test whether a point is colliding with an object defined by some vertices.
EDIT: The "map" will have objects on it, like trees and houses. I was thinking, maybe even caves. It will also have hills and what-not, but a heightmap is out of the question I guess.
If you're going for a do-it-yourself solution, I'd suggest the following.
Terrain
Preferably have the terrain be a grid, with just each vertex varying in height. The result is a bunch of quads, each with the same dimensions in the (x, y) plane. Vertices have varying (z) values to make up the slopes. The grid arrangement allows you to easily determine which triangles you will have to test for intersection when performing collision checks.
If that's not an option (terrain pre-built in modeling package), you'll probably want to use an in-memory grid anyway, and store a list of triangles that are (partially) inside each cell.
Checking for collision
The easiest approach would be to treat your actors as points in space. You'd then determine the height of the terrain at such a point as follows:
1. Determine the grid cell the point is in
2. Get the triangles associated with that cell
3. Find the triangle containing the point (in the (x, y) plane)
4. Get the height of that triangle's plane at the point
In the case of the "pure grid" terrain, step 3 involves just a single point/plane check to determine which of the 2 triangles we need. Otherwise, you'd have to do a point-in-triangle check for each triangle (you can probably re-use point/plane checks or use BSP for further optimization).
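For the point-in-triangle check in step 3, a minimal 2D sketch (assuming each vertex is an { x, y } object) could look like this:

// True if point p lies inside (or on the edge of) triangle a-b-c in the (x, y) plane.
// It checks that p is on the same side of all three edges.
function pointInTriangle2D(p, a, b, c) {
  function side(p, v0, v1) {
    return (v1.x - v0.x) * (p.y - v0.y) - (v1.y - v0.y) * (p.x - v0.x);
  }
  var d1 = side(p, a, b);
  var d2 = side(p, b, c);
  var d3 = side(p, c, a);
  var hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  var hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos); // mixed signs mean the point is outside
}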
Step 4 pseudo-code:
point = [ x, y, z ]                        // actor position
relativePt = point - triangle.vertices[0]
normal = triangle.plane.normal
distance = relativePt DOT normal           // (this is a dot-product)
intersection = [
    point.x,
    point.y,
    point.z - distance / normal.z ]        // note the minus: step back along the vertical until we land on the plane
This calculates the intersection of the ray straight up/down from the actor position with the triangle's plane. So that's the height of the terrain at that (x, y) position. Then you can simply check if the actor's position is below that height, and if so, set its z-coordinate to the terrain height.
Objects (houses, trees, ... )
Give each object 1 or more convex collision volumes that together roughly correspond to its actual shape (see this page on UDN to see how the Unreal Engine works with collision hulls for objects).
You will have to use some spatial subdivision technique to quickly determine which of all world objects to check for collision when moving an actor. If most movement is in 2 dimensions (for example, just terrain and some houses), you could use a simple grid or a quadtree (which is like a grid with further subdivisions). A 3-dimensional option would be the octree.
The point of the spatial subdivision is the same as with the terrain organized as a grid: to associate objects with cells/volumes in space so you can determine the set of objects to check for a collision, instead of checking for collision against all objects.
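For the plain uniform-grid option, the bookkeeping can stay very small. A sketch (cellSize and the objects' x/y fields are assumptions; a real version would register an object in every cell its bounds touch, not just the one under its center):

// Map world positions to grid cells and keep a list of objects per cell.
var cellSize = 32;
var cells = {}; // keyed by "col,row"

function cellKey(x, y) {
  return Math.floor(x / cellSize) + ',' + Math.floor(y / cellSize);
}

function addObject(obj) {
  var key = cellKey(obj.x, obj.y);
  (cells[key] = cells[key] || []).push(obj);
}

function potentialColliders(actor) {
  return cells[cellKey(actor.x, actor.y)] || []; // only the objects sharing the actor's cell
}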
Checking for collision
Get the "potential collision objects" using the spatial subdivision technique you've chosen; e.g. get the objects in the actor's current grid cell.
For each convex collision volume of each object:
Using the separating axis theorem, determine whether the actor intersects with the collision volume. See my answer to a different post for some implementation hints (that question is about the 2D case, but the code largely applies; just read "edge" as "plane"), and see the sketch after the notes below.
If collision occurs, use the normal of one of the "offending planes" to move the actor to just next to that plane.
Note: In this case, model your actor's collision volume as a box or 3-sided cylinder or so.
Also, you may want to consider building a BSP tree for each object and use axis-aligned bounding boxes for your actors instead. But that's getting a little beyond the scope of this answer. If your objects will have more complicated collision volumes, that will be faster.
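To make the SAT step concrete, here is a minimal sketch of the overlap test itself. It assumes each shape is given as a list of [x, y, z] vertices and that the candidate separating axes (face normals of both volumes plus cross products of their edges) have already been collected into axes:

// Project a shape's vertices onto an axis and return the [min, max] interval.
function project(shape, axis) {
  var min = Infinity, max = -Infinity;
  for (var i = 0; i < shape.vertices.length; i++) {
    var v = shape.vertices[i];
    var d = v[0] * axis[0] + v[1] * axis[1] + v[2] * axis[2];
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return [min, max];
}

// The shapes overlap only if their projections overlap on every candidate axis.
function intersectsSAT(a, b, axes) {
  for (var i = 0; i < axes.length; i++) {
    var pa = project(a, axes[i]);
    var pb = project(b, axes[i]);
    if (pa[1] < pb[0] || pb[1] < pa[0]) return false; // found a separating axis
  }
  return true; // no separating axis among the candidates, so treat them as colliding
}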
Final thoughts
Well, this is already a really long answer, and these are just some broad strokes. Collision is a pretty broad topic, because there are so many different approaches you can take depending on your needs.
For example, I haven't covered "trace collision", which is detecting collision when an actor moves. Instead, the above suggestion on objects checks if an actor is inside an object. This may or may not suit your needs.
I also just realized I haven't covered actor-vs-actor collision. Perhaps that's best done as colliding 2 circles in the (x, y) plane, with an additional check to see if their vertical spaces intersect.
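A sketch of that check, assuming actors store x, y, z plus a radius and a height:

// Actor-vs-actor collision treated as two vertical cylinders.
function actorsCollide(a, b) {
  var dx = a.x - b.x, dy = a.y - b.y;
  var r = a.radius + b.radius;
  var horizontalHit = dx * dx + dy * dy <= r * r;                   // circles overlap in the (x, y) plane
  var verticalHit = a.z < b.z + b.height && b.z < a.z + a.height;   // vertical spans overlap
  return horizontalHit && verticalHit;
}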
Anyway, I really gotta wrap this up. Hopefully this will at least point you in the right direction.
We are trying to create a simple programming environment that lets people generate 3D forms (it's inspired by the Scratch project). We'd like it to work in a "turtle geometry" fashion, where a creature (we call it the beetle, by analogy to the logo turtle) moves around the 3D space, and can leave objects along the way that take on its position and orientation.
We are currently using Three.js. While you can move and rotate objects, it's not clear how to create the effect we want, where translations and rotations "accumulate" and can be applied to new objects. We'd also like to store a stack of these matrices to push and pop.
Here's a specific example. The user of our system would create a program like this (this is pseudocode):
repeat 36 [
    move 10
    rotate 10
    draw cube
]
The idea is that the beetle would move around a circle as it executes this program, leaving a cube at each position.
Is this possible using Three.js? Do we need to switch to pure WebGL?
You might have a look at
http://u2d.com/turtle_js/index.html
It uses JavaScript syntax, though. And it does not give any feedback in case of errors; the turtle just does nothing.
The requirements sound fairly high-level. There is no reason you would need the low-level access of raw webgl. A higher-level library like three.js, or another, would do just fine.
As #Orbling pointed out, you'd need to figure out rotation for 3D; e.g.
rotateX 10
(rotate 10 degrees counterclockwise around x axis), or
turnLeft 10
(rotate 10 degrees counterclockwise around current up vector).
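To sketch how the accumulating transforms and the matrix stack could look in three.js (an illustration only; it assumes a scene already exists and uses an empty Object3D as the beetle):

// The beetle is an empty Object3D; each move/rotate accumulates in its own transform.
var beetle = new THREE.Object3D();
var stack = [];

function move(dist)  { beetle.translateZ(dist); }              // move along the beetle's current heading
function rotate(deg) { beetle.rotateY(deg * Math.PI / 180); }  // turn about the beetle's up axis

function pushState() { stack.push({ p: beetle.position.clone(), q: beetle.quaternion.clone() }); }
function popState()  { var s = stack.pop(); beetle.position.copy(s.p); beetle.quaternion.copy(s.q); }

function drawCube() {
  var cube = new THREE.Mesh(new THREE.BoxGeometry(2, 2, 2), new THREE.MeshNormalMaterial());
  cube.position.copy(beetle.position);      // the cube takes on the beetle's position...
  cube.quaternion.copy(beetle.quaternion);  // ...and orientation
  scene.add(cube);
}

// The pseudocode from the question:
for (var i = 0; i < 36; i++) {
  move(10);
  rotate(10);
  drawCube();
}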