I am trying to build a raycasting engine. I have successfully rendered a scene column by column using ctx.fillRect(), as shown below.
[canvas-raycasting.png — demo / code]
The code I wrote for the above render:
var scene = [];  // distances of walls from the player, one per ray
var points = []; // points at which each ray hits a wall
/*
I have a Raycaster class which does all the math required for ray casting and returns
an object which contains two arrays:
1 > scene  : array of numbers representing the distance of the wall from the player.
2 > points : objects of type { x , y } representing the point where the ray hits the wall.
*/
var data = raycaster.cast(walls);
/*
raycaster : instance of the Raycaster class,
walls     : array of boundaries containing objects of type { x1 , y1 , x2 , y2 } where
            (x1,y1) is the start point and
            (x2,y2) is the end point.
*/
scene = data.scene;

var scene_width = 800;
var scene_height = 400;
var w = scene_width / scene.length;

for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0);           // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of the wall (the farther, the smaller)
    ctx.beginPath();
    ctx.fillStyle = 'rgb(' + s + ',' + s + ',' + s + ')';
    ctx.fillRect(i * w, 200 - h * 0.5, w + 1, h);
    ctx.closePath();
}
Now I am trying to build a web-based FPS (first-person shooter) and am stuck on rendering wall textures on the canvas.
The ctx.drawImage() method takes arguments as follows:
void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
But ctx.drawImage() draws the image as a flat rectangle, with none of the 3D effect of Wolfenstein 3D, and I have no idea how to achieve that.
Should I use ctx.transform()? If yes, how? If no, what should I do?
I am looking for the maths used to produce a pseudo-3D effect using 2D raycasting.
Some pseudo-3D games are Wolfenstein 3D and Doom; I am trying to build something like this.
THANK YOU :)
The way that you're mapping (or not, as the case may be) texture coordinates isn't working as intended.
> I am looking for the Maths used to produce pseudo 3d effect using 2D raycasting
The Wikipedia Entry for Texture Mapping has a nice section with the specific maths of how Doom performs texture mapping. From the entry:
> The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct.
A "fast affine mapping" is just a simple 2D interpolation of texture coordinates, and would be an appropriate operation for what you're attempting. A limitation of the Doom engine was also that
> Doom renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
It doesn't appear that your logic contains any code for transforming coordinates between the various coordinate spaces. At the very least, you'll need to apply transforms between a given raytraced coordinate and texture coordinate space. This typically involves matrix math and is commonly referred to as projection, as in projecting points from one space/surface to another. With affine transformations you can avoid matrices in favor of linear interpolation.
The coordinate equation for this, adapted to your variables (see above), might look like the following:
u = (1 - a) * wallStart + a * wallEnd
where 0 <= a <= 1
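Inverting that relationship to recover a from a ray hit and turn it into a texture column might look like the sketch below. The names hit, wallStart, wallEnd, and textureWidth are assumptions for illustration, not identifiers from the question:

// Sketch: recover the interpolation factor a (0..1) from the hit point
// on the wall segment, then map it to a texture column (u coordinate).
function textureU(hit, wallStart, wallEnd, textureWidth) {
    var dx = wallEnd.x - wallStart.x;
    var dy = wallEnd.y - wallStart.y;
    // use the larger axis to avoid dividing by a near-zero component
    var a = (Math.abs(dx) > Math.abs(dy))
        ? (hit.x - wallStart.x) / dx
        : (hit.y - wallStart.y) / dy;
    a = Math.min(1, Math.max(0, a));
    // u is the x-coordinate of the texture column to sample
    return Math.floor(a * (textureWidth - 1));
}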
Alternatively, you could use a Weak Perspective projection, since you have much of the data already computed. From Wikipedia again:
> To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:
>
> B_x = A_x * B_z / A_z
>
> where
> B_x is the screen x coordinate,
> A_x is the model x coordinate,
> B_z is the focal length (the axial distance from the camera center to the image plane), and
> A_z is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above diagram and equation.
In your case A_x is the location of the wall, in worldspace. B_z is the focal length, which will be 1. A_z is the distance you calculated using the ray trace. The result is the x or y coordinate representing a translation to viewspace.
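As a rough sketch of how that projection could plug into the column loop from the question (the names here are assumptions alongside the question's map()-based approach, not a drop-in replacement):

// Sketch (assumed names): weak perspective projection of one wall slice.
var focalLength = 1;                 // B_z from the quote above
var wallWorldHeight = 1;             // assumed wall height in world units
var distance = scene[i];             // A_z, the distance returned by the raycaster
var projected = wallWorldHeight * focalLength / distance; // height in view space
var columnHeight = projected * scene_height;              // scaled to pixels
var columnTop = (scene_height - columnHeight) * 0.5;      // vertically centred column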
The main draw routine for W3D documents the techniques used to raytrace and transform coordinates for rendering the game. The code is quite readable even if you're not familiar with C/ASM and is a great way to learn more about your topics of interest. For more reading, I would suggest performing a search in your engine of choice for things like "matrix transformation of coordinates for texture mapping", or search the GameDev SE site for similar.
A specific area of that file to zero in on would be the section starting at line 267:
> ========================
> =
> = TransformTile
> =
> = Takes paramaters:
> = tx,ty : tile the object is centered in
> =
> = globals:
> = viewx,viewy : point of view
> = viewcos,viewsin : sin/cos of viewangle
> = scale : conversion from global value to screen value
> =
> = sets:
> = screenx,transx,transy,screenheight: projected edge location and size
> =
> = Returns true if the tile is withing getting distance
> =
A great book on "teh Maths" is this one - I would highly recommend it for anyone seeking to create or improve upon these skills.
Update:
Essentially, you'll be mapping pixels (points) from the image onto points on your rectangular wall-tile, as reported by the ray trace.
Pseudo(ish)-code:
var image = getImage(someImage); // get the image however you want. make sure it finishes loading before drawing
var iWidth = image.width, iHeight = image.height;
var sX = 0, sY = 0; // top-left corner of image. Adjust when using e.g., sprite sheets
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0);           // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of wall (farther the smaller)
    var wX = i * w, wY = 200 - h * 0.5;
    var wWidth = w + 1, wHeight = h;
    //... render the rectangle shape
    /* we are using the same image, but we are scaling it to the size of the rectangle
       and placing it at the same location as the wall. */
    var u, v, uW, vH; // texture x- and y- values and sizes. compute these.
    ctx.drawImage(image, sX, sY, iWidth, iHeight, u, v, uW, vH);
}
Since I'm not familiar with your code performing the raytrace, its coordinate system, etc., you may need to further adjust the values for wX, wY, wWidth, and wHeight (e.g., translate points from center to top-left corner).
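Not part of the original answer, but a widely used way to get the Wolfenstein-style look with ctx.drawImage() is to draw a one-pixel-wide vertical slice of the texture for each screen column, using the source-rectangle form of drawImage(). A sketch under the same assumed names as above (textureU() stands for whichever mapping you use to pick the texture column, e.g. the affine interpolation sketched earlier; wallStart/wallEnd are assumed per-ray wall endpoints):

for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var h = map(c, 0, 500, scene_height, 10);                   // projected wall height
    var texX = textureU(points[i], wallStart, wallEnd, iWidth); // assumed helper, see above
    // Source: a 1px-wide column of the texture.
    // Destination: one screen column, stretched to the projected wall height.
    ctx.drawImage(image,
                  texX, 0, 1, iHeight,             // source slice
                  i * w, 200 - h * 0.5, w + 1, h); // destination column
}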
I am confused about implementing (measuring) angles on an HTML5 canvas, especially after rotating objects.
Let's assume I have drawn a shape like this
ctx.moveTo(100,100)//center of canvas
ctx.lineTo(0,200)//left bottom
ctx.lineTo(200,200)//right bottom
We know it is a 45 degree (pi/4) angle.
But how do I figure it out using Math functions, to see whether the shape was rotated or not?
And how do I re-measure it after changing the x and y of the shape (the first point)?
First things first, you will need to make sure the points are stored in some sort of data structure, as it won't be easy to pull the points from the canvas itself. Something like an array of arrays:
var angle = [[100,100], [0,200], [200,200]];
Now, you need to convert your lines to vectors:
var AB = [angle[1][0]-angle[0][0], angle[1][1]-angle[0][1]];
var BC = [angle[2][0]-angle[1][0], angle[2][1]-angle[1][1]];
Now find the dot-product of the two:
var dot_product = (AB[0]*BC[0]) + (AB[1]*BC[1]); // the dot product of AB and BC
Now you need to find the length (magnitude) of the vectors:
var ABmagnitude = Math.sqrt(Math.pow(AB[0],2) + Math.pow(AB[1],2));
var BCmagnitude = Math.sqrt(Math.pow(BC[0],2) + Math.pow(BC[1],2));
Now you put it all together by dividing the dot product by the product of the two magnitudes and taking the arccosine:
var theta = Math.acos(dot_product/(ABmagnitude*BCmagnitude));
You mentioned rotation, but unless you are only rotating one line, the angle will stay the same.
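One caveat worth adding (not part of the original answer): with AB and BC defined head-to-tail as above, Math.acos gives the angle you turn through at the middle point (135 degrees for these points), not the triangle's interior angle there. For the interior angle, point both vectors away from the shared vertex:

// Interior angle at the middle point B: use vectors B->A and B->C.
// BC above already points from B to C, so only BA is new here.
var BA = [angle[0][0] - angle[1][0], angle[0][1] - angle[1][1]];
var dot = BA[0] * BC[0] + BA[1] * BC[1];
var mag = Math.sqrt(BA[0] * BA[0] + BA[1] * BA[1]) *
          Math.sqrt(BC[0] * BC[0] + BC[1] * BC[1]);
var interior = Math.acos(dot / mag);
console.log(interior * 180 / Math.PI); // 45 degrees for the points above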
The reason I'm interested in canvases having any shape is that it would then be possible to cut out images with Bezier curves and have the text of a web page flow around the canvas shape, i.e. cut-out image.
What is needed is the possibility to have a free-form shaped div, SVG and HTML5 canvas. (Applied to SVG, I understand this would be equivalent to Flash symbols.) You could then imagine applying a box model (padding, border and margin) for shapes, but it wouldn't be a box (it would be parallel to the shape)!
I suppose it would also then be possible to have text that wraps inside a shape as much as text that flows around a shape.
I read an interesting blog post about "Creating Non-Rectangular Layouts with CSS Shapes" here: http://sarasoueidan.com/blog/css-shapes/
but it doesn't include text wrapping inside a shape.
Then, there's also a CSS Shapes editor for Brackets (a code editor):
http://blogs.adobe.com/webplatform/2014/04/17/css-shapes-editor-in-brackets/
As simple as it may sound it actually involves quite a few steps to achieve.
An outline would look something like this:
Define the shape as a polygon, i.e. a point array
Find the bounds of the polygon (the region the polygon fits inside)
Contract the polygon by the padding, using either a centroid algorithm or simply a brute-force approach based on the center of the bounds
Define the line height of the text and use that as the basis for the number of scan lines
Basically, use a polygon-fill algorithm to find the segments inside the shape that can be filled with text. The steps for this are:
Use an odd/even scanner by finding the intersection points (using line-intersection math) between each text scan line and each of the lines between the points in the polygon
Sort the points by x
Use odd and even points to create a segment. This segment will always be inside the polygon
Add clipping using the original polygon
Draw in the image
Use the segments to get a width. Start parsing the text to fill and measure its width.
When the text width fits within the segment width, print the chars that fit
Repeat for the next text/words/chars until the end of the text or the segments
This is fully doable; actually, I went ahead to create a challenge for myself on this problem, for the fun of it, so I created a generic solution that I put on GitHub released under MIT license.
The principles described above are implemented, and to visualize the steps:
Define the polygon and padding - here I chose to just use a simple brute-force approach and calculate a smaller polygon based on the center and a padding value - the light grey is the original polygon and the black, obviously, the contracted version:
The points are defined as an array [x1, y1, x2, y2, ... xn, yn] and the code to contract it (see link to project for full source on all these parts):
var pPoints = [],
    i = 0, x, y, a, d, dx, dy;

for (; i < points.length; i += 2) {
    x = points[i];
    y = points[i + 1];
    dx = x - bounds.px;
    dy = y - bounds.py;
    a = Math.atan2(dy, dx);
    d = Math.sqrt(dx * dx + dy * dy) - padding;
    pPoints.push(bounds.px + d * Math.cos(a),
                 bounds.py + d * Math.sin(a));
}
Next step is to define the lines we want to scan. The lines are based on line height for font:
That is simple enough - just make sure the start and end points are outside the polygon.
We use an odd/even scan approach and check the intersection of the scanline versus all lines in the polygon. If we get an intersection point, we store it in a list for that line.
The code to detect intersecting lines is:
function getIntersection(line1, line2) {
    // "unroll" the objects
    var p0x = line1.x1,
        p0y = line1.y1,
        p1x = line1.x2,
        p1y = line1.y2,
        p2x = line2.x1,
        p2y = line2.y1,
        p3x = line2.x2,
        p3y = line2.y2,

        // calc difference between the coords
        d1x = p1x - p0x,
        d1y = p1y - p0y,
        d2x = p3x - p2x,
        d2y = p3y - p2y,

        // determinant
        d = d1x * d2y - d2x * d1y,

        px, py,
        s, t;

    // if the lines are not intersecting / are parallel, return immediately
    if (Math.abs(d) < 1e-14)
        return null;

    // solve x and y for the intersecting point
    px = p0x - p2x;
    py = p0y - p2y;

    s = (d1x * py - d1y * px) / d;
    if (s >= 0 && s <= 1) {
        // if s was in range, calc t
        t = (d2x * py - d2y * px) / d;
        if (t >= 0 && t <= 1) {
            return {x: p0x + (t * d1x),
                    y: p0y + (t * d1y)};
        }
    }
    return null;
}
Then we sort the points for each line and use pairs of points to create segments - this is essentially a polygon-fill algorithm. The result will be:
The code to build segments is a bit extensive for this post so check out the project linked above.
And finally we use those segments to lay out the actual text. We scan the text from the current text pointer and see how much of it will fit inside the segment width. The current code is somewhat basic and skips a lot of considerations such as word breaks, text baseline position and so forth, but for initial use it will do.
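A minimal sketch of that text-fitting step (not the project's actual code; segments is assumed to be an array of { x, y, width } objects produced by the scan above):

// Greedily fill each horizontal segment with as much of the text as fits.
function fillSegmentsWithText(ctx, segments, text) {
    var pos = 0; // current index into the text
    for (var i = 0; i < segments.length && pos < text.length; i++) {
        var seg = segments[i];
        var count = 0;
        // grow the substring until it no longer fits in the segment width
        while (pos + count < text.length &&
               ctx.measureText(text.substr(pos, count + 1)).width <= seg.width) {
            count++;
        }
        ctx.fillText(text.substr(pos, count), seg.x, seg.y);
        pos += count;
        if (count === 0) pos++; // skip a char on segments too narrow for one, to avoid looping forever
    }
}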
The result when put together will be:
Hope this gives an idea about the steps involved.
[EDIT: see this jsfiddle for a live example plus accompanying code]
Using three.js I'm trying to render out some celestial bodies with prominent features.
Unfortunately no examples are provided on how to apply spherical heightmaps with threejs but they do have an example where a heightmap is applied to a plane.
I took said example and modified it to use a SphereGeometry() instead of a PlaneGeometry().
Obviously the geometry of a sphere is critically different from that of a plane, and when rendering out the results the sphere shows as a flat piece of texture.
The heightmap code for planes:
var plane = new THREE.PlaneGeometry( 2000, 2000, quality - 1, quality - 1 );
plane.applyMatrix( new THREE.Matrix4().makeRotationX( - Math.PI / 2 ) );
for (var i = 0, l = plane.vertices.length; i < l; i++) {
    var x = i % quality, y = ~~ (i / quality);
    plane.vertices[i].y = data[(x * step) + (y * step) * 1024] * 2 - 128;
}
Now I'm guessing the solution is relatively simple: instead of mapping to the plane's 2d coordinate in the for loop, it has to find the surface coordinate of the sphere in 3d space. Unfortunately I'm not really a pro at 3d maths so I'm pretty much stuck at this point.
An example of the heightmap applied to the sphere and all code is put together in this jsfiddle. An updated jsfiddle shows an altered sphere but with random data instead of the height map data.
I know for a fact you can distort the sphere's 3D points to generate these surface details, but I'd like to do so using a heightmap. This JSFiddle is as far as I got - it'll randomly alter points to give a rocky appearance to the sphere, but obviously it doesn't look very natural.
EDIT: The following is the logic I wish to implement, which maps heightmap data onto a sphere.
In order to map the data to a sphere, we will need to map coordinates from a simple spherical coordinate system (longitude φ, latitude θ, radius r) to Cartesian coordinates (x, y, z). Just as in normal height-mapping the data value at (x, y) is mapped to z, we will map the value at (φ, θ) to r. This transformation comes down to:
x = r × cos φ × sin θ
y = r × sin φ × sin θ
z = r × cos θ
r = Rdefault + Rscale × d(φ, θ)
The parameters Rdefault and Rscale can be used to control the size of the sphere and the height map on it.
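A sketch of what that mapping could look like in code (an assumption-laden illustration, not taken from the answers below: data is assumed to be a greyscale heightmap array of size hmWidth x hmHeight, and the legacy THREE.Geometry vertex API is assumed):

// Displace each vertex radially by the heightmap value sampled at (phi, theta).
var Rdefault = 100, Rscale = 10;        // assumed base radius and height scale
var hmWidth = 1024, hmHeight = 512;     // assumed heightmap dimensions

for (var i = 0; i < geometry.vertices.length; i++) {
    var v = geometry.vertices[i];
    // longitude phi in [0, 2*PI), polar angle theta in [0, PI], from the vertex direction
    var phi = Math.atan2(v.y, v.x) + Math.PI;
    var theta = Math.acos(v.z / v.length());
    // map (phi, theta) to heightmap pixel coordinates
    var px = Math.min(hmWidth - 1, Math.floor(phi / (2 * Math.PI) * hmWidth));
    var py = Math.min(hmHeight - 1, Math.floor(theta / Math.PI * hmHeight));
    var d = data[py * hmWidth + px] / 255;  // height sample, 0..1
    v.setLength(Rdefault + Rscale * d);     // r = Rdefault + Rscale * d(phi, theta)
}
geometry.verticesNeedUpdate = true;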
Use a Vector3 to move each vertex:
var vector = new THREE.Vector3();
vector.set(geometry.vertices[i].x, geometry.vertices[i].y, geometry.vertices[i].z);
vector.setLength(h); // h = desired radius for this vertex (base radius + height value)
geometry.vertices[i].x = vector.x;
geometry.vertices[i].y = vector.y;
geometry.vertices[i].z = vector.z;
Example: http://jsfiddle.net/damienlabat/b3or4up3/
If you want to apply a 2D map onto the 3D sphere surface, you will need to use UVs of the sphere. Fortunately, UVs come with THREE.SphereGeometry by default.
The UVs are stored per-face though, so you will need to iterate through the faces array.
For each face in the geometry:
Read the corresponding UV value in the FaceVertexUvs array for each associated vertex.
Read the height map value using that UV location.
Shift the vertex along the vertex normal by that value. The faces array gives the vertex index, which you can use to index into the vertices array to get/set the vertex position.
After this is all done, set verticesNeedUpdate to true to update the vertices.
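A sketch of those steps (again assuming the legacy THREE.Geometry API with Face3, and that the heightmap image has been drawn into a canvas named heightCanvas so its pixels can be read; scale is an assumed displacement strength):

var hctx = heightCanvas.getContext('2d');
var hData = hctx.getImageData(0, 0, heightCanvas.width, heightCanvas.height).data;
var scale = 10;
var displaced = new Array(geometry.vertices.length).fill(false);

geometry.faces.forEach(function (face, fi) {
    var uvs = geometry.faceVertexUvs[0][fi];        // three Vector2s, one per face vertex
    [face.a, face.b, face.c].forEach(function (vi, j) {
        if (displaced[vi]) return;                  // shared vertices: displace only once
        displaced[vi] = true;
        var uv = uvs[j];
        var px = Math.min(heightCanvas.width - 1, Math.floor(uv.x * heightCanvas.width));
        var py = Math.min(heightCanvas.height - 1, Math.floor((1 - uv.y) * heightCanvas.height));
        var height = hData[(py * heightCanvas.width + px) * 4] / 255; // red channel, 0..1
        var n = face.vertexNormals[j];              // vertex normal for this corner
        geometry.vertices[vi].add(n.clone().multiplyScalar(height * scale));
    });
});
geometry.verticesNeedUpdate = true;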
I would like to draw the 3D points represented in the image as a 3D rectangle. Any idea how I could represent these on the x, y and z axes?
Here the projection type is orthographic.
Thanks
Okay. Let's look at a simple example of what you are trying to accomplish, and why this is such a complicated problem.
First, let's look at some projection functions. You need a way to mathematically describe how to transform a 3D (or higher-dimensional) point into a 2D space (your monitor) - a projection.
The simplest to understand is a very simple dimetric projection. Something like:
x' = x + z/2;
y' = y + z/4;
What does this mean? Well, x' is your x coordinate's 2D projection: for every unit you move backwards in space, the projection will move that point half that many units to the right. And y' represents the same projection for your y coordinate: for every unit you move backwards in space, the projection will move that point a quarter of a unit up.
So a point at [0,0,0] will get projected to a 2d point of [0,0]. A point at [0,0,4] will get projected to a 2d point of [2,1].
Implemented in JavaScript, it would look something like this:
// Dimetric projection functions
var dimetricTx = function(x,y,z) { return x + z/2; };
var dimetricTy = function(x,y,z) { return y + z/4; };
Once you have these projection functions -- or ways to translate from 3D space into 2D space -- you can use them to start drawing your image. A simple example of that using the JS canvas. First, some context stuff:
var c = document.getElementById("cnvs");
var ctx = c.getContext("2d");
Now, let's make a little helper to draw a 3D point:
var drawPoint = (function(ctx, tx, ty, size) {
    return function(p) {
        size = size || 3;
        // Draw "point"
        ctx.save();
        ctx.fillStyle = "#f00";
        ctx.translate(tx.apply(undefined, p), ty.apply(undefined, p));
        ctx.beginPath();
        ctx.arc(0, 0, size, 0, Math.PI * 2);
        ctx.fill();
        ctx.restore();
    };
})(ctx, dimetricTx, dimetricTy);
This is a pretty simple function: we are injecting the canvas context as ctx, as well as our tx and ty functions, which in this case are the dimetric functions we saw earlier.
And now a polygon drawer:
var drawPoly = (function(ctx, tx, ty) {
    return function() {
        var args = Array.prototype.slice.call(arguments, 0);
        // Begin the path
        ctx.beginPath();
        // Move to the first point
        var p = args.pop();
        if (p) {
            ctx.moveTo(tx.apply(undefined, p), ty.apply(undefined, p));
        }
        // Draw to the next point
        while ((p = args.pop()) !== undefined) {
            ctx.lineTo(tx.apply(undefined, p), ty.apply(undefined, p));
        }
        ctx.closePath();
        ctx.stroke();
    };
})(ctx, dimetricTx, dimetricTy);
With those two functions, you could effectively draw the kind of graph you are looking for. For example:
// The array of points
var points = [
    // [x,y,z]
    [20, 30, 40],
    [100, 70, 110],
    [30, 30, 75]
];
(function(width, height, depth, points) {
    var c = document.getElementById("cnvs");
    var ctx = c.getContext("2d");

    // Set some context
    ctx.save();
    ctx.scale(1, -1);
    ctx.translate(0, -c.height);
    ctx.save();

    // Move our graph
    ctx.translate(100, 20);

    // Draw the "container"
    ctx.strokeStyle = "#999";
    drawPoly([0, 0, depth], [0, height, depth], [width, height, depth], [width, 0, depth]);
    drawPoly([0, 0, 0], [0, 0, depth], [0, height, depth], [0, height, 0]);
    drawPoly([width, 0, 0], [width, 0, depth], [width, height, depth], [width, height, 0]);
    drawPoly([0, 0, 0], [0, height, 0], [width, height, 0], [width, 0, 0]);
    ctx.stroke();

    // Draw the points
    for (var i = 0; i < points.length; i++) {
        drawPoint(points[i]);
    }
})(150, 100, 150, points);
However, you should now be able to see some of the complexity of your actual question emerge. Namely, you asked about rotation; in this example we are using an extremely simple projection (our dimetric projection), which captures little more than an oversimplified relationship between depth and its influence on x,y position. As the projections become more complex, you need to know more about your relationship/orientation in 3D space in order to create a reasonable 2D projection.
A working example of the above code can be found here. The example also includes isometric projection functions that can be swapped out for the dimetric ones to see how that changes the way the graph looks. It also does some different visualization stuff that I didn't include here, like drawing "shadows" to help "visualize" the actual orientation -- the limitations of 3D to 2D projections.
It's complicated, and even a superficial discussion is beyond the scope of this answer. I recommend you read more into the mathematics behind 3D; there are plenty of resources, both online and in print. Once you have a more solid understanding of the basics of how the math works, then return here if you have a specific implementation question about it.
What you want to do is impossible using the method you've stated: a box, when rotated in 3 dimensions, won't look anything like your diagram. It will also vary based on the type of projection you need. You can, however, get started using three.js, which is a 3D drawing library for JavaScript.
Hope this helps.
How to Draw a 3D Rectangle?
Sketching a 3-dimensional rectangle means dealing with a figure that, unlike a 2-D figure, needs 3 axes to represent it. So, how do we draw a 3D rectangle?
To start with, draw two lines, one vertical and one horizontal, crossing in the middle of the paper like a letter "t". These are temporary construction lines and will be erased once the 3-D rectangle is complete. Next, draw a square with sides of 1 inch; the square must be exact so that the 90-degree angles at its corners are accurate. Now, starting from the upper right corner of the square, draw a line segment 2 inches long at an angle of 45 degrees. Similarly, draw another 2-inch line segment from the upper left corner of the square at a 45-degree angle. These two segments act as diagonals with respect to the temporary horizontal line, and they are parallel to each other. Next, draw a line joining the end points of these two diagonals.
Then, starting from the end point of the right-hand 2-inch diagonal, draw a 1-inch line perpendicular to the temporary horizontal line. Finally, join the lower left corner of the square with the end point of that last 1-inch line, and the 3-D rectangle is complete. Now we can erase the initial "t". This 3-D rectangle resembles a cuboid.
I am working on a project in javascript involving google maps.
The goal is to figure out 16-20 coordinate points, each within n kilometers of a given latitude/longitude, such that the 16 points, when connected, form a circle around the original coordinates.
The end goal is to make it so I can figure out coordinates to plot and connect on google maps to make a circle around a given set of coordinates.
The code would go something like:
var coordinates = Array();
function findCoordinates(lat, long, range) {
}
coordinates = findCoordinates(-20, 40, 3);
Now to make the magic happen in the findCoordinates() function.
Basically what you're trying to do is find N points on the circumference of a circle of a given radius around a given point. One simple way of doing it is splitting the 360 degrees of a circle into N equal chunks and finding the points at regular intervals.
The following should do roughly what you're after -
function findCoordinates(lat, long, range)
{
    // How many points do we want? (should probably be a function param..)
    var numberOfPoints = 16;
    var degreesPerPoint = 360 / numberOfPoints;

    // Keep track of the angle from centre to radius
    var currentAngle = 0;

    // The points on the radius will be lat+x2, long+y2
    var x2;
    var y2;

    // Track the points we generate to return at the end
    var points = [];

    for (var i = 0; i < numberOfPoints; i++)
    {
        // Math.cos/Math.sin expect radians, so convert the angle first
        var angleInRadians = currentAngle * Math.PI / 180;

        // X2 point will be cosine of angle * radius (range)
        x2 = Math.cos(angleInRadians) * range;

        // Y2 point will be sin * range
        y2 = Math.sin(angleInRadians) * range;

        // Assuming here you're using points for each x,y..
        var p = new Point(lat + x2, long + y2);

        // save to our results array
        points.push(p);

        // Shift our angle around for the next point
        currentAngle += degreesPerPoint;
    }

    // Return the points we've generated
    return points;
}
The array of points you get back can then easily be used to draw the circle you wish on your google map.
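One caveat not covered above: range is added directly to lat and long, which are in degrees, so if your range is in kilometres you'll want to convert it to degree offsets first. A rough sketch using the common approximation that one degree of latitude is about 111.32 km (an assumption that is good enough for small circles):

// Convert a range in kilometres to lat/long offsets in degrees.
function kmToDegrees(lat, rangeKm) {
    var latDegrees = rangeKm / 111.32;
    var lngDegrees = rangeKm / (111.32 * Math.cos(lat * Math.PI / 180));
    return { lat: latDegrees, lng: lngDegrees };
}

// e.g. scale x2 by deg.lat and y2 by deg.lng inside findCoordinates():
var deg = kmToDegrees(-20, 3);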
If your overall goal however is just to draw a circle at a fixed radius around a point, then a far easier solution may be to use an overlay. I've found KMBox to be very easy to set up - you give it a central point, a radius and an image overlay (in your case, a transparent circle with a visible line around the edge) and it takes care of everything else, including resizing it on zoom in/out.
I had to find some code to calculate Great Circle distances a while back (just Google "Great Circle" if you don't know what I'm talking about) and I found this site:
http://williams.best.vwh.net/gccalc.htm
You might be able to build up your own JavaScript code to do your lat/lon range calculations using the JavaScript from that site as a reference. It sounds to me like you just need to divide up the 360 degrees of a circle into an equal number of pieces and draw a line out to an equal distance from the center at each "bearing". Once you know the lat/lon at the other end of each bearing/distance line, then connecting the dots to form a polygon is trivial.
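As a sketch of that idea (the standard great-circle destination-point formula on a spherical Earth; the 6371 km mean radius is an assumption, not something taken from the linked page):

// Point at a given distance (km) and bearing (degrees) from a lat/long in degrees.
function destinationPoint(lat, lng, bearingDeg, distanceKm) {
    var R = 6371;                       // mean Earth radius in km
    var d = distanceKm / R;             // angular distance
    var brng = bearingDeg * Math.PI / 180;
    var lat1 = lat * Math.PI / 180;
    var lng1 = lng * Math.PI / 180;

    var lat2 = Math.asin(Math.sin(lat1) * Math.cos(d) +
                         Math.cos(lat1) * Math.sin(d) * Math.cos(brng));
    var lng2 = lng1 + Math.atan2(Math.sin(brng) * Math.sin(d) * Math.cos(lat1),
                                 Math.cos(d) - Math.sin(lat1) * Math.sin(lat2));

    return { lat: lat2 * 180 / Math.PI, lng: lng2 * 180 / Math.PI };
}

// 16 points, 3 km out, one every 22.5 degrees of bearing:
// for (var b = 0; b < 360; b += 360 / 16) { points.push(destinationPoint(-20, 40, b, 3)); }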