I'm currently working on a project that involves creating various kinds of "rings" using Three.js.
My "rings" are generated using an algorithm I wrote myself. Basically I define 4 sections positioned at π, π/2, 3π/2 and 2π and then interpolate a ring from those sections using quadratic bezier curves. So far this works pretty well. What I would like to be able to do next, is to programmatically determine which vertices are on the "upper surface" of the ring.
How would I be able to achieve something like that?
The 4 shapes on the left are my 4 sections. The ring on the right is generated using those 4 sections.
Well, there are several ways to do this; to plan an optimal one, we'd have to know more about your scenario. But besides that, why not:
Go by vertex index, if you have the same number of vertices for every section (e.g., every vertex whose index modulo the section's vertex count is 2 or 3 is on the outside).
Calculate the distance from each vertex to the ring's center and see if it's over a certain threshold (i.e., the vertices further away are visible).
Figure out whether the vertex normal points in roughly the same direction as the radius or distance you calculated (see the sketch below).
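For example, the normal test could look something like this in Three.js. This is a minimal sketch, assuming a THREE.BufferGeometry with computed vertex normals and a ring whose "up" axis is +Y; the axis and the 0.7 threshold are illustrative, not taken from the question.

const up = new THREE.Vector3(0, 1, 0);
const threshold = 0.7; // cosine of the widest angle still counted as "upper"

function upperSurfaceIndices(geometry) {
  // Assumes geometry.computeVertexNormals() has already been called.
  const normal = geometry.getAttribute('normal');
  const upper = [];
  const n = new THREE.Vector3();
  for (let i = 0; i < normal.count; i++) {
    n.fromBufferAttribute(normal, i);
    if (n.dot(up) > threshold) upper.push(i); // vertex i faces mostly upward
  }
  return upper;
}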
Up to you... if you're not satisfied, a more thorough explanation of your scenario might help a bit more.
Eventually, I'd like to get to a place where I'm generating multiple fake "countries" on a fake "continent," but to start I boiled the problem down into something simple, and I'm already stuck at this first step:
Given an area (let's say 10 tiles x 10 tiles) and a number of polygons (let's say 6), I'm looking for a way to randomly give the polygons tile coordinates in a way that 6 polygons of approximately equal size (+/- 10% is fine, and honestly even planned) will fill up the entirety of the grid. Not the precise code per se, but even how I'd go about doing it on paper.
I've also thought about using a spiral. Starting from the approximate centre of the area (let's say [4, 4]), spiral around clockwise and cut off at 100 / 6 ≈ 16. While this seems like a really straightforward approach, both on paper and in code, it certainly makes for weird-looking polygons:
And no matter how much I tweak some random variables (e.g., size of polygon, where I start, etc.), it'll always look like that. In a variation, starting at the bottom-left point and going up, then right, then down, then right, etc., yields the same:
To create something vaguely realistic, I'm thinking that I need to generate 6 centroids across my [10, 10] area, and then use the spiral method to create regions from that.
I find myself pretty quickly running into three problems:
How do I "equally space" out the centroids?
How do I handle the "overlap" areas, like shown with the ?s or ?!s (for the second pass-through)
How do I handle the "gap" areas, like shown with the letter G above?
And finally... is this centroid approach even the best method? I've heard of (and used, via 'clicking a button') k-means clustering... would it work to theoretically set my 100 points as input points and try to generate 6 clusters?
Any help would of course be much appreciated!
I'm not very experienced with this, but you could use a Voronoi diagram. You would generate the points on your grid, spaced out randomly, then use the Voronoi diagram (https://en.wikipedia.org/wiki/Voronoi_diagram) of those points to determine the countries. To make the countries more equal in size, you would apply Lloyd relaxation (https://en.wikipedia.org/wiki/Lloyd's_algorithm) to even them out.
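A rough sketch of that pipeline, using the d3-delaunay library for the Voronoi part (any implementation would do; the grid size, seed count, iteration count and variable names are made up for illustration):

import { Delaunay } from "d3-delaunay";

const N = 6, W = 10, H = 10;
let seeds = Array.from({ length: N }, () => [Math.random() * W, Math.random() * H]);

// Lloyd relaxation: repeatedly move each seed toward the centroid of its cell.
for (let iter = 0; iter < 10; iter++) {
  const voronoi = Delaunay.from(seeds).voronoi([0, 0, W, H]);
  seeds = seeds.map((_, i) => {
    const cell = voronoi.cellPolygon(i); // closed polygon of cell i
    let cx = 0, cy = 0;
    for (const [x, y] of cell) { cx += x; cy += y; }
    return [cx / cell.length, cy / cell.length]; // vertex average as a cheap centroid proxy
  });
}

// Each tile belongs to the "country" of its nearest seed.
const delaunay = Delaunay.from(seeds);
const country = [];
for (let y = 0; y < H; y++)
  for (let x = 0; x < W; x++)
    country.push(delaunay.find(x + 0.5, y + 0.5));

Note that this also answers the overlap/gap worries: assigning every tile to its nearest seed leaves no gaps and no double-claimed tiles by construction.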
I am working on a system to procedurally build meshes for "mines". Right now I don't want to achieve visual perfection; I am more focused on the basics.
I got to the point where I am able to generate the shape of the mines and, from that, generate the 2 meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but the ground is proving really hard to map to UV coordinates properly and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting it while its area is greater than X.
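To make the splitting rule concrete, here is a minimal sketch of area-driven midpoint subdivision (one way to do it; the post doesn't say how the split is performed, and the "at least once" initial split is omitted for brevity). Triangles are arrays of three [x, y] points.

function triangleArea([a, b, c]) {
  return Math.abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2;
}

function mid(p, q) { return [(p[0] + q[0]) / 2, (p[1] + q[1]) / 2]; }

// Split a triangle at its edge midpoints until every piece is <= maxArea.
function subdivide(tri, maxArea, out = []) {
  if (triangleArea(tri) <= maxArea) { out.push(tri); return out; }
  const [a, b, c] = tri;
  const ab = mid(a, b), bc = mid(b, c), ca = mid(c, a);
  for (const child of [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]])
    subdivide(child, maxArea, out);
  return out;
}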
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges:
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (and a displacement map as well; please ignore that for now):
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
I think that, in theory, this should be right. My guess was that the order of the vertices was creating the problem, but if that were the case the texture should appear rotated or merely odd, not odd AND stretched like that.
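For reference, the naive planar mapping described above amounts to something like this for a THREE.BufferGeometry: one UV pair per vertex, normalized over the ground's bounding box (a sketch, assuming the ground lies in the XZ plane):

function planarUVs(geometry) {
  geometry.computeBoundingBox();
  const { min, max } = geometry.boundingBox;
  const pos = geometry.getAttribute('position');
  const uvs = new Float32Array(pos.count * 2); // exactly one (u, v) per vertex
  for (let i = 0; i < pos.count; i++) {
    uvs[2 * i]     = (pos.getX(i) - min.x) / (max.x - min.x); // u from x
    uvs[2 * i + 1] = (pos.getZ(i) - min.z) / (max.z - min.z); // v from z
  }
  geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
}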
Once I will get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to procedurally build the mesh, send it to multiple clients, and achieve the same visual rendering everywhere. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on shaders on the client side; otherwise, placing tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the rendering on the client side; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced without needing a whole client/server infrastructure.
Any suggestion?
Thanks!
I sorted out the issue in my code; it was pretty stupid, though: by mistake I was adding 3 UV mappings per triangle instead of 1 per vertex, causing a huge visual mess. Once that was sorted out, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
I am a civil architect by profession, with a passion for maths, physics and computers. For one of my projects, I am designing a room with 3 straight walls and a 4th, curved wall. A sound source is near the left wall.
+-------
|        \
|   +     |
|        /
+-------
Having some spare time on my hands, I decided to try modeling the acoustics of this room using JavaScript and Canvas API. My goal was to calculate for every point in the room:
Net intensity of sound, by summing the sound coming directly from the source and the reflections off the walls (including the curved one). This would include attenuation due to the inverse square law and absorption by the walls.
Reverb characteristics, by keeping track of path lengths directly from the source and via reflections off the walls. If a point in the room receives a reflected signal about 0.05 seconds after the primary signal arrives, we might have an echo problem.
I assumed a canvas size of 800x600 pixels and real-world room dimensions of 45x44 feet (left wall = 44 ft, top/bottom walls 31 ft, curved wall radius 22 ft), with the sound source 5 ft from the left wall. I modeled each wall as a line or circle equation and wrote a function that tells me whether a point is inside the room. For each pixel in the canvas, I convert it to a real-world coordinate, calculate its distance from the source, and use the inverse square law to calculate the sound intensity. What I ended up with was this:
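In code, that direct-sound pass is essentially the following (a simplified sketch: insideRoom() and plot() are stand-ins for the actual helpers, and the source's y position is assumed to be mid-wall):

const SCALE_X = 45 / 800, SCALE_Y = 44 / 600; // feet per pixel
const source = { x: 5, y: 22 };               // 5 ft from the left wall

for (let py = 0; py < 600; py++) {
  for (let px = 0; px < 800; px++) {
    const x = px * SCALE_X, y = py * SCALE_Y;  // pixel -> room coordinates
    if (!insideRoom(x, y)) continue;
    const r2 = (x - source.x) ** 2 + (y - source.y) ** 2;
    const intensity = 1 / Math.max(r2, 1e-6);  // inverse square law
    plot(px, py, intensity);                   // e.g. map log(intensity) to a color
  }
}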
However, needless to say, this only captures the primary bounce from the source. It doesn't capture any reflections, and those are proving way too hard for me to calculate.
I'm looking for insight into how I can do that. I have partially tried the following:
Instead of iterating over points in the room grid-wise, I've tried generating rays from the source. Calculating reflections off the straight walls is easy, but the curved wall presents some challenges. The biggest problem I'm having is this: if I start with 360 rays, the points closest to the source have way too many points per pixel, but as we move outwards the points become so diluted that there may be tens of pixels between adjacent points. This also means that when I reflect a ray, it would almost certainly not land on the points created by the primary bounce, and I wouldn't be able to simply add them up. Even if I interpolated, the result would not be correct, as some points would register intensity due to the primary bounce, even fewer would register intensity due to secondary/tertiary bounces, and many points would register nothing. In the image below, I've tried this approach with primary bounces only.
Iterate over the room grid-wise. For each point in the room, calculate the direct path to the source and the reflected location of the source in each wall. Use these distances to calculate the net intensity at every sample point in the room. This is easy to do for the straight walls, but the math turns EXTRAORDINARILY complicated, and unsolvable for me, for the curved wall.
                X
                +

      + B

  +                  +
  A                  O
Assume A is the source, O is the center of the curve, B is the point in the room we're currently testing, and X is a point on the curve. For secondary bounces, ∠AXO = ∠BXO. We know A, O and B. If we can find X, then BX needs to be extended backwards by a distance equal to AX, and the image of the source would be located there. The problem is that finding X is a very hard problem. And even if it can be done, it only accounts for secondary bounces; tertiary bounces would be even harder to calculate.
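(One possible workaround, not something I have implemented: find X numerically instead of algebraically. This appears to be Alhazen's reflection problem, which has no tidy closed form, but the circle can be scanned for sign changes of the signed-angle condition and each bracket refined by bisection. A sketch, with A, B, O and radius r as above; every root found is only a candidate and must still lie on the wall's actual arc.)

function signedAngle(u, v) { // angle from u to v, in (-PI, PI]
  return Math.atan2(u.x * v.y - u.y * v.x, u.x * v.x + u.y * v.y);
}

// Zero exactly when the normal XO bisects the angle AXB, i.e. when the
// angle of incidence equals the angle of reflection at X.
function mismatch(theta, A, B, O, r) {
  const X = { x: O.x + r * Math.cos(theta), y: O.y + r * Math.sin(theta) };
  const n = { x: O.x - X.x, y: O.y - X.y }; // inward normal at X
  const a = { x: A.x - X.x, y: A.y - X.y };
  const b = { x: B.x - X.x, y: B.y - X.y };
  return signedAngle(n, a) + signedAngle(n, b);
}

function findX(A, B, O, r, steps = 720) {
  const f = (t) => mismatch(t, A, B, O, r);
  let prev = f(0);
  for (let i = 1; i <= steps; i++) {
    let lo = ((i - 1) / steps) * 2 * Math.PI;
    let hi = (i / steps) * 2 * Math.PI;
    const cur = f(hi);
    if (prev * cur < 0) { // sign change: bisect to refine theta
      for (let k = 0; k < 40; k++) {
        const m = (lo + hi) / 2;
        if (f(lo) * f(m) <= 0) hi = m; else lo = m;
      }
      return (lo + hi) / 2; // angle of a candidate X on the circle
    }
    prev = cur;
  }
  return null;
}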
I believe Option #2 is the better way to go about this. But I do not possess enough math/computer skills to tackle this problem on my own. At this point in time, I'm trying to solve this not for my project, but for my personal curiosity. I would be grateful if any of you can solve this and share your solutions. Or if you can just give me insight into this problem, I would be glad to work on this further.
I lack expertise in interpolation and ray-tracing computations (which I think would be required for this problem).
Thanks,
Asim
So. With great pointers from lara and a deep dive into matrix math and Bresenham's line-rendering algorithm, I was finally able to complete this hobby project. =) Outlined here are the steps, for anyone wishing to go down this route for similar problems.
Ditch algebraic equations in favor of matrix math. Ditch lines in favor of parametric lines.
Represent walls in terms of rays and circles. Lines can be represented as [x y 1] = [x0 y0 1] + t*[dx dy 1]. Circles can be represented as (X - C)^2 = r^2.
Project rays outwards from the source. For each ray, calculate its intersection with one of the walls, and resize it to span from its starting point to the intersection point.
When an intersection point is calculated, calculate the normal vector of the wall at the point of intersection. For straight walls, calculating the normal is simple ([-dy dx 1]). For circles, the normal is X - C.
Use matrices to reflect the incident ray about the normal (a condensed sketch of the intersection and reflection steps follows this list).
Repeat the process for as many bounces as needed.
Map the World Coordinate System the plan is in to a Cell Coordinate System to divide the world into a grid of cells. Each cell can be 1'-0" x 1'-0" in size (in feet). Use matrices again for transformation between the two coordinate systems.
Use the transformation matrix to convert the ray to the Cell Coordinate System.
Use Bresenham's line algorithm to determine which cells the ray passes through. For each cell, use the inverse square law to calculate the ray's intensity in that cell, and add this value to the cells grid.
Finally, use the Canvas API and another transformation matrix to convert from the Cell Coordinate System to the Screen Coordinate System and render the cells grid on screen.
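Here is a condensed sketch of the intersection-and-reflection core, with plain {x, y} objects standing in for the homogeneous [x y 1] form (names are illustrative). Ray: P(t) = P0 + t*D; circle: (X - C)^2 = r^2.

function intersectRayCircle(p0, d, c, r) {
  // Substitute the ray into (X - C)^2 = r^2 and solve the quadratic in t.
  const f = { x: p0.x - c.x, y: p0.y - c.y };
  const a = d.x * d.x + d.y * d.y;
  const b = 2 * (f.x * d.x + f.y * d.y);
  const k = f.x * f.x + f.y * f.y - r * r;
  const disc = b * b - 4 * a * k;
  if (disc < 0) return null;
  const t = (-b + Math.sqrt(disc)) / (2 * a); // far root: the wall as seen from inside
  return t > 1e-9 ? t : null;
}

function reflect(d, n) { // reflect direction d about the unit normal n
  const s = 2 * (d.x * n.x + d.y * n.y);
  return { x: d.x - s * n.x, y: d.y - s * n.y };
}

// One bounce off the curved wall:
// const t = intersectRayCircle(p0, d, c, r);
// const X = { x: p0.x + t * d.x, y: p0.y + t * d.y };
// const n = { x: (X.x - c.x) / r, y: (X.y - c.y) / r }; // normal is X - C, normalized
// const d2 = reflect(d, n);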
Here are a few screenshots of what I achieved using this:
Rays of light emanating from the source and intersecting with the walls.
Calculating multiple reflections of a single ray.
Calculating single reflections of all rays emanating from the source.
Using Bresenham's Line algorithm to identify cells crossed by the ray and plotting logarithmic intensity.
Rendering all cells for all primary rays only.
Rendering all cells for reflected rays only. The caustic surface is clearly visible here.
Rendering all cells for primary and reflected rays using color-coding.
I got to learn a lot of very interesting math during the course of this project. I respect matrices a lot more now. All through my college years, I wondered whether I would ever need Bresenham's line algorithm in my life, since all graphics libraries have line-drawing routines built in. For the first time, I found that I needed to use this algorithm directly; without it, this project would not have been possible.
I will make the code available on GitHub soon. Thanks to everyone who contributed to my understanding of these concepts.
Asim
I am trying to implement a field-of-view algorithm in my game, and I followed the great tutorial here: sight-and-light. Here is what I have so far:
As you can see it works fine for me :)
I then tried to use this algorithm on a tiled map. It also works fine, but it is a little bit slow, so I am now trying to find a way to optimize it.
Some information that might help with optimization:
The tiled map is Orthogonal
All the tiles are squares of the same size, 32 * 32
Tiles marked 0 are empty; tiles marked 1 are obstacles
I have used a Connected Component Labelling algorithm as a preprocessing step:
All the obstacles have been merged into several regions
I know the positions of all the vertices in each region
Something like this:
Say I have 9 connected regions (9 polygons) and 40 vertices in total.
Based on the algorithm in the link above, there will be:
ray-casts: 40 * 3 (3 ray-casts per vertex, at the vertex angle ± 0.00001)
edges: 40
edge * ray-cast intersection tests: 40 * 3 * 40 = 4800 (one such test is sketched below)
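For scale, each of those 4800 tests is a ray-versus-segment check along these lines (a generic parametric version, not necessarily the tutorial's exact code):

// Ray: origin o, direction d. Segment: a to b.
// Returns the distance t along the ray, or null if there is no hit.
function raySegment(o, d, a, b) {
  const ex = b.x - a.x, ey = b.y - a.y;
  const denom = d.x * ey - d.y * ex;
  if (Math.abs(denom) < 1e-12) return null; // ray and segment are parallel
  const t = ((a.x - o.x) * ey - (a.y - o.y) * ex) / denom;   // along the ray
  const s = ((a.x - o.x) * d.y - (a.y - o.y) * d.x) / denom; // along the segment
  return (t >= 0 && s >= 0 && s <= 1) ? t : null;
}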
I think there should be a way to reduce the number of ray-casts and the number of edges involved in the intersection calculations in the situation above, but I just cannot figure out a good solution.
Any suggestions will be appreciated, thanks :)
What you are doing can be optimized a great deal.
First, there's no point in using all the vertices, and no need to do any intersection tests.
Take each polygon. For each vertex, figure out whether it's an inversion point: take the ray from the origin/eye through the vertex and check whether its neighbors are on the same side. Then figure out which direction goes toward the origin and follow it until you find another inversion point. The segments in between these points are the ones facing the origin/eye. If you have a convex shape there are only 2 of them; for more complex shapes there can be more.
Next, convert all the inversion points to polar coordinates and sort them by angle. Now you should have a bunch of possibly overlapping intervals (take care with the wraparound from 360° to 0°). If your scene is simple, the intervals are non-overlapping, so you only need to follow your polygons with the light (no tests needed). If you encounter an overlap, take the inversion point and the current edge from the existing interval and see whether the inversion point is on the same side as the origin/eye. If so, you can intersect the ray through the inversion point with the current edge to get the far point and link it to the inversion point, whose edge now becomes the current edge. If there's an interval that doesn't belong to any polygon, the rays in it go to infinity (there's nothing for them to hit).
This works for all non-overlapping polygons.
So in your case you will get only 9 * 2 inversion points (your polygons are simple), and you need to do very few edge * ray intersections, because their layout is fairly sparse and the algorithm quickly discards the obvious non-intersecting cases.
If this is for a real-time application, you can optimize further by taking advantage of the fact that the inversion points mostly remain the same, and when they change they usually travel along the polygon by one edge (it could be more if the polygon is small or the move distance is large).
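A sketch of the inversion-point test described above; a vertex is an inversion point when, seen from the eye, its two polygon neighbors fall on the same side of the ray through it, and the sign of a 2D cross product tells you the side:

function side(eye, v, p) {
  const rx = v.x - eye.x, ry = v.y - eye.y; // ray eye -> vertex
  return Math.sign(rx * (p.y - v.y) - ry * (p.x - v.x));
}

function inversionPoints(polygon, eye) { // polygon: array of {x, y}
  const out = [];
  for (let i = 0; i < polygon.length; i++) {
    const prev = polygon[(i - 1 + polygon.length) % polygon.length];
    const next = polygon[(i + 1) % polygon.length];
    const v = polygon[i];
    if (side(eye, v, prev) === side(eye, v, next)) out.push(i);
  }
  return out;
}

With the inversion points in hand, the sorted-interval sweep replaces almost all of the per-ray intersection tests.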
So I have an animation that I'm coding in javascript and HTML5 (no libraries, no plugins, no nothing and I'd like it to stay that way). The animation uses physics (basically a bunch of unusual springs attached to masses) to simulate a simple liquid. The output from this part of the program is a grid (2d-array) of objects, each with a z value. This works quite nicely. My problem arises when drawing the data to an HTML5 Canvas.
That's what it looks like. Trust me, it's better when animated.
For each data point, the program draws one circle with a color determined by its z value. Just drawing these points, however, makes the grid pattern painfully obvious, and it is difficult to see the fluid it represents. To solve this, I made the circles larger and more transparent so that they overlapped each other and the colors blended, creating a simple convolution blur. The result was both fast and beautiful, but for one small flaw:
As the circles are drawn in order, their color values don't stack equally, so later-drawn circles obscure earlier-drawn ones. Mathematically, the renderer is taking repeated weighted averages of the circles' color values. This works fine for two circles, giving each a value of 0.5*alpha_n, but for three circles the renderer takes the average of the newest circle with the average of the other two, giving the newest circle a value of 0.5*alpha_n but the earlier circles each a value of 0.25*alpha_n. As more circles overlap, the process continues, creating a bias toward newer circles and against older ones. What I want instead is for each of three or more circles to get a value of 0.33*alpha_n, so that earlier circles are not obscured.
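In pixel terms, that repeated weighted average looks like this (r, g and b are just the color values of three successive circles); what I want instead is for n overlapping circles to contribute 1/n each:

const avg = (dst, c) => 0.5 * dst + 0.5 * c; // each new draw halves everything below it

let px = avg(r, g); // r and g each weigh 0.5
px = avg(px, b);    // now r: 0.25, g: 0.25, b: 0.5 -- the newest circle wins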
Here's an image of alpha-blending in action. Notice that the later blue circle obscures the earlier-drawn red and green ones:
Here's what the problem looks like in action. Notice the different appearance of the left side of the lump.
To solve this problem, I've tried various methods:
Using different canvas "blend-modes". "Multiply" (as seen in the above image) did the trick, but created unfortunate color distortions.
Lumping together drawing calls. Instead of making each circle a separate canvas path, I tried lumping them all together into one. Unfortunately, this is incompatible with having separate fill colors and, what's more, the path did not blend with itself at all, creating a matte, monotone silhouette.
Interlacing drawing-order. Instead of drawing the circles in 0 to n order, I tried drawing first the evens and then the odds. This only partially solved the problem, and created an unsightly layering pattern in which the odds appeared to float above the evens.
Building my own blend mode using putImageData. I tried creating a manual pixel-shader to suit my needs using javascript, but, as expected, it was far too slow.
At this point, I'm a bit stuck. I'm looking for creative ways of solving or circumventing this problem, and I welcome your ideas. I'm not very interested in being told that it's impossible, because I can figure that out for myself. How would you elegantly draw a fluid from such data points?
If you can decompose your circles into two groups (evens and odds), such that there is no overlap among circles within a group, the following sequence should give the desired effect:
Clear the background
Draw the evens with an alpha of 1.0 (opaque)
Draw the odds with an alpha of 1.0 (opaque)
Draw the evens with an alpha of 0.5
Places which are covered by neither evens nor odds will show the background. Those covered only by evens will show the evens at 100% opacity; those covered only by odds will show the odds at 100% opacity; those covered by both will show a 50% blend.
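A sketch of that sequence with the Canvas API (drawGroup() is a stand-in for however the circles actually get painted):

function render(ctx, evens, odds) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // background
  ctx.globalAlpha = 1.0;
  drawGroup(ctx, evens); // evens, opaque
  drawGroup(ctx, odds);  // odds, opaque: they win wherever the groups overlap...
  ctx.globalAlpha = 0.5;
  drawGroup(ctx, evens); // ...until the evens return at 50%, evening things out
  ctx.globalAlpha = 1.0;
}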
There are other approaches one can use to try to blend three or more sets of objects, but doing it "precisely" is complicated. An alternative approach, if one has three or more images that should be blended uniformly according to their alpha channel, is to repeatedly draw all of the images while the global alpha decays from 1 to 0 (note that the aforementioned procedure effectively does exactly that, and is numerically exact when there are only two images). Numerical rounding issues limit the precision of this technique, but even two or three passes may substantially reduce the severity of ordering-caused visual artifacts while using fewer steps than precise blending would require.
Incidentally, if the pattern of blending is fixed, it may be possible to speed up rendering enormously by drawing the evens and odds on separate canvases, not as circles but as opaque rectangles, and subtracting from the alpha channel of one of the canvases the contents of a fixed "cookie-cutter" canvas or fill pattern. If one properly computes the contents of the cookie-cutter canvases, this approach may be used for more than two sets of canvases. Determining the contents of the cookie-cutter canvases may be somewhat slow, but it only needs to be done once.
Well, thanks for all the help, guys. :) But, I understand, it was a weird question and hard to answer.
I'm putting this here in part so that it will provide a resource to future viewers. I'm still quite interested in other possible solutions, so I hope others will post answers if they have any ideas.
Anyway, I figured out a solution on my own: before drawing the circles, I do a comb sort on them to order them by z-value, then draw them in reverse. The result is that the highest-valued objects (which should be closest to the viewer) are drawn last, and so are not obscured by other circles. The obscuring effect is still there, but it now happens in a way that makes sense with the geometry. Here is what the simulation looks like with this correction; notice that it is now symmetrical:
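The fix itself is tiny; a sketch (a plain ascending sort does the same job as the comb sort plus reversal, and points / drawCircle() are stand-ins for the actual names):

points.sort((a, b) => a.z - b.z);      // lowest z first...
for (const p of points) drawCircle(p); // ...so the highest, closest circles land on top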