I have a basic particle system in JavaScript (utilising canvas for rendering), and I'm trying to find the best way to handle collisions between particles. The particle system can handle about 70,000 particles at a pretty decent FPS.
It consists of an array that contains every Particle object.
Each Particle object contains 3 Vector objects (one for displacement, velocity, and acceleration) which contain an x and a y variable.
Before each frame, acceleration vectors are applied to velocity vectors, and velocity vectors are applied to displacement vectors for every single Particle object.
The renderer then iterates through each Particle and then draws a 1x1 pixel square at the location of every displacement vector.
The particle system also has 'magnetic' fields, which can cause the particles to accelerate towards or away from a given point.
I tried applying a 'magnetic' field to each particle, but the calculations I use to get the updated acceleration vectors for each particle are too inefficient, and this method reduced the FPS considerably.
Below is the code I use to recalculate Particle acceleration vectors, with respect to nearby magnetic fields (This function is called before every frame):
Particle.prototype.submitToFields = function (fields) {
    // our starting acceleration this frame
    var totalAccelerationX = 0;
    var totalAccelerationY = 0;
    // for each passed field
    for (var i = 0; i < fields.length; i++) {
        var field = fields[i];
        // find the distance between the particle and the field
        var vectorX = field.point.x - this.point.x;
        var vectorY = field.point.y - this.point.y;
        // calculate the force via MAGIC and HIGH SCHOOL SCIENCE!
        var force = field.mass / Math.pow(vectorX * vectorX + vectorY * vectorY, 1.5);
        // add to the total acceleration the force adjusted by distance
        totalAccelerationX += vectorX * force;
        totalAccelerationY += vectorY * force;
    }
    // update our particle's acceleration
    this.acceleration = new Vector(totalAccelerationX, totalAccelerationY);
}
It's obvious why the above method reduced the performance drastically - the amount of work grows with the number of particles multiplied by the number of fields, so every particle or field added makes each frame noticeably more expensive.
Is there another method of particle collision detection that will have good performance with thousands of particles? Will these methods work with my current object structure?
Don't create a new Vector here. It means that you're creating 70 000 new Vectors each frame. Just change the vector values:
this.acceleration.x = totalAccelerationX; // or : this.acceleration[0] = totalAccelerationX;
this.acceleration.y = totalAccelerationY; // or : this.acceleration[1] = totalAccelerationY;
If that doesn't help enough, you'll have to use a Web Worker.
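For reference, a minimal sketch of submitToFields with the allocation removed, mutating the existing acceleration vector instead (this assumes acceleration is a plain mutable { x, y } object, as described in the question):
Particle.prototype.submitToFields = function (fields) {
    var totalAccelerationX = 0;
    var totalAccelerationY = 0;
    for (var i = 0; i < fields.length; i++) {
        var field = fields[i];
        var vectorX = field.point.x - this.point.x;
        var vectorY = field.point.y - this.point.y;
        var force = field.mass / Math.pow(vectorX * vectorX + vectorY * vectorY, 1.5);
        totalAccelerationX += vectorX * force;
        totalAccelerationY += vectorY * force;
    }
    // reuse the existing vector instead of allocating a new one every frame
    this.acceleration.x = totalAccelerationX;
    this.acceleration.y = totalAccelerationY;
};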
I am trying to build a raycasting engine. I have successfully rendered a scene column by column using ctx.fillRect() as follows.
canvas-raycasting.png
demo
code
The code I wrote for the above render:
var scene = []; // this contains the distance of the wall from the player for a particular ray
var points = []; // this contains the point at which each ray hits the wall
/*
i have a Raycaster class which does all math required for ray casting and returns
an object which contains two arrays
1 > scene : array of numbers representing distance of wall from player.
2 > points : contains objects of type { x , y } representing point where ray hits the wall.
*/
var data = raycaster.cast(wall);
/*
raycaster : instance of Raycaster class ,
walls : array of boundaries, contains objects of type { x1 , y1 , x2 , y2 } where
(x1,y1) represent start point ,
(x2,y2) represent end point.
*/
scene = data.scene;
var scene_width = 800;
var scene_height = 400;
var w = scene_width / scene.length;
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of wall (farther the smaller)
    ctx.beginPath();
    ctx.fillStyle = 'rgb(' + s + ',' + s + ',' + s + ')';
    ctx.fillRect(i * w, 200 - h * 0.5, w + 1, h);
    ctx.closePath();
}
Now I am trying to build a web-based FPS (First Person Shooter) and I am stuck on rendering wall textures on the canvas.
The ctx.drawImage() method takes arguments as follows:
void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
but the ctx.drawImage() method draws the image as a flat rectangle, with no pseudo-3D effect like in Wolfenstein 3D.
I have no idea how to do it.
Should I use ctx.transform()? If yes, how? If no, what should I do?
I am looking for the maths used to produce a pseudo-3D effect using 2D raycasting.
Some pseudo-3D games are Wolfenstein 3D and Doom.
I am trying to build something like this
THANK YOU : )
The way that you're mapping (or not, as the case may be) texture coordinates isn't working as intended.
I am looking for the maths used to produce a pseudo-3D effect using 2D raycasting
The Wikipedia Entry for Texture Mapping has a nice section with the specific maths of how Doom performs texture mapping. From the entry:
The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct.
A "fast affine mapping" is just a simple 2D interpolation of texture coordinates, and would be an appropriate operation for what you're attempting. A limitation of the Doom engine was also that
Doom renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
It doesn't appear that your logic contains any code for transforming coordinates between the various coordinate spaces. At the very least, you'll need to transform between a given raytraced coordinate and the texture coordinate space. This typically involves matrix math and is often referred to as projection, as in projecting points from one space/surface onto another. With affine transformations you can avoid matrices in favor of linear interpolation.
The coordinate equation for this adapted to your variables (see above) might look like the following:
u = (1 - a) * wallStart + a * wallEnd
where 0 <= a <= 1
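As a concrete sketch (not part of the original code): if you also know where along a wall segment the ray hit, the interpolation parameter a and the texture coordinate u could be computed like this, where hit is the { x , y } point returned by the raycaster and wall is the { x1 , y1 , x2 , y2 } segment it hit (pairing them up is left to your Raycaster):
// fraction of the way along the wall, 0 at (x1, y1) and 1 at (x2, y2)
var a = Math.hypot(hit.x - wall.x1, hit.y - wall.y1) /
        Math.hypot(wall.x2 - wall.x1, wall.y2 - wall.y1);
// interpolate between the texture coordinates at the wall's two ends
var u = (1 - a) * wallStart + a * wallEnd;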
Alternatively, you could use a Weak Perspective projection, since you have much of the data already computed. From wikipedia again:
To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:
B_x = A_x * B_z / A_z
where B_x is the screen x coordinate, A_x is the model x coordinate, B_z is the focal length (the axial distance from the camera center to the image plane), and A_z is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.
In your case A_x is the location of the wall, in worldspace. B_z is the focal length, which will be 1. A_z is the distance you calculated using the ray trace. The result is the x or y coordinate representing a translation to viewspace.
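A minimal sketch of that projection applied to your column loop (a sketch only, assuming a focal length of 1 and a constant world-space wall height wallHeight, both names introduced here for illustration):
var focalLength = 1; // B_z
var wallHeight = 1;  // assumed constant world-space height of every wall
for (var i = 0; i < scene.length; ++i) {
    var distance = scene[i];                             // A_z, from the ray cast
    var projected = wallHeight * focalLength / distance; // B_y = A_y * B_z / A_z
    var h = projected * scene_height;                    // scale the projected height to screen pixels
    // ...draw the column of height h, as in your existing loop
}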
The main draw routine for W3D documents the techniques used to raytrace and transform coordinates for rendering the game. The code is quite readable even if you're not familiar with C/ASM and is a great way to learn more about your topics of interest. For more reading, I would suggest performing a search in your engine of choice for things like "matrix transformation of coordinates for texture mapping", or search the GameDev SE site for similar.
A specific area of that file to zero-in on would be this section starting ln 267:
> ========================
> =
> = TransformTile
> =
> = Takes paramaters:
> = tx,ty : tile the object is centered in
> =
> = globals:
> = viewx,viewy : point of view
> = viewcos,viewsin : sin/cos of viewangle
> = scale : conversion from global value to screen value
> =
> = sets:
> = screenx,transx,transy,screenheight: projected edge location and size
> =
> = Returns true if the tile is withing getting distance
> =
A great book on "teh Maths" is this one - I would highly recommend it for anyone seeking to create or improve upon these skills.
Update:
Essentially, you'll be mapping pixels (points) from the image onto points on your rectangular wall-tile, as reported by the ray trace.
Pseudo(ish)-code:
var image = getImage(someImage); // get the image however you want. make sure it finishes loading before drawing
var iWidth = image.width, iHeight = image.height;
var sX = 0, sY = 0; // top-left corner of image. Adjust when using e.g., sprite sheets
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of wall (farther the smaller)
    var wX = i * w, wY = 200 - h * 0.5;
    var wWidth = w + 1, wHeight = h;
    //... render the rectangle shape
    /* we are using the same image, but we are scaling it to the size of the rectangle
       and placing it at the same location as the wall. */
    var u = wX, v = wY, uW = wWidth, vH = wHeight; // destination rectangle on the canvas
    ctx.drawImage(image, sX, sY, iWidth, iHeight, u, v, uW, vH);
}
Since I'm not familiar with your code performing the raytrace, its coordinate system, etc., you may need to further adjust the values for wX, wY, wWidth, and wHeight (e.g., translate points from center to top-left corner).
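Going one step further than the pseudo-code above (this is a sketch of the classic Wolfenstein-style approach, not something your current data structures provide directly): instead of stretching the whole image over each wall rectangle, you draw a single one-texel-wide column of the texture per screen column. That requires knowing, for each ray, how far along the wall it hit, for example by having the raycaster return the hit wall alongside each { x , y } point (the wall property below is hypothetical):
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var h = map(c, 0, 500, scene_height, 10);
    var hit = points[i];
    // fraction of the way along the wall where this ray hit
    var a = Math.hypot(hit.x - hit.wall.x1, hit.y - hit.wall.y1) /
            Math.hypot(hit.wall.x2 - hit.wall.x1, hit.wall.y2 - hit.wall.y1);
    // pick the matching one-texel-wide column of the texture...
    var texX = Math.floor(a * image.width);
    // ...and stretch it over the projected wall slice for this screen column
    ctx.drawImage(image, texX, 0, 1, image.height, i * w, 200 - h * 0.5, w + 1, h);
}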
Hello, I am a bit new to 3D programming. I am trying to improve the efficiency of a particle system that I am simulating with LiquidFun.
Currently I am drawing the particle system this way:
for (var j = 0; j < maxParticleSystems; j++) {
    var currentParticleSystem = world.particleSystems[j];
    var particles = currentParticleSystem.GetPositionBuffer();
    var maxParticles = particles.length;
    for (var k = 0; k < maxParticles; k += 2) {
        context.drawImage(particleImage, (particles[k] * mToPx) + offsetX, (particles[k + 1] * mToPx) + offsetY);
        context.fill();
    }
}
This basically draws each particle one at a time, which is very slow. I have been doing some reading about position buffer objects in WebGL. How would I use one to draw these?
This is arguably too broad a question for Stack Overflow. WebGL is just a rasterization API which means there's an infinite number of ways to render and/or compute particles with it.
Some common ways
Compute particle positions in JavaScript, Render with POINTS in WebGL
Compute particle positions in JavaScript, Render with quads in WebGL (rendering quads lets you orient the particles)
Compute particle positions based on time alone in a shader, render POINTS.
Compute particle positions based on time alone in a shader, render quads
Compute particle positions in shaders with state by reading and writing state to a texture through a framebuffer
And hundreds of other variations.
Particle system using webgl
Efficient particle system in javascript? (WebGL)
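For the simplest of those options (positions computed in JavaScript, rendered as gl.POINTS), a minimal sketch follows. It assumes you already have a WebGL context gl and a compiled/linked program whose vertex shader reads an a_position attribute and sets gl_PointSize; those names are placeholders, not part of LiquidFun or your code:
// one-time setup
var positionBuffer = gl.createBuffer();
var positionLoc = gl.getAttribLocation(program, 'a_position');

function drawParticles(particles) {
    // particles is the position buffer from LiquidFun: [x0, y0, x1, y1, ...]
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    // re-upload every frame; DYNAMIC_DRAW hints that the data changes often
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(particles), gl.DYNAMIC_DRAW);

    gl.enableVertexAttribArray(positionLoc);
    gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

    // one vertex per particle in a single draw call,
    // instead of one drawImage() call per particle
    gl.drawArrays(gl.POINTS, 0, particles.length / 2);
}
Usage would then be something like drawParticles(world.particleSystems[0].GetPositionBuffer()).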
I'm using Three.js to create points on a sphere, similar to the periodic table of elements example.
My data set is circles of irregular size, and I wish to evenly distribute them around the surface of a sphere. After numerous hours searching the web, I realize that is much harder than it sounds.
Here are examples of this idea in action:
Vimeo
Picture
circlePack Java applet
Is there an algorithm that will allow me to do this? The packing ratio doesn't need to be super high and it'd ideally be something quick and easy to calculate in JavaScript for rendering in Three.js (Cartesian or Coordinate system). Efficiency is key here.
The circle radii can vary widely. Here's an example using the periodic table code:
Here's a method to try: an iterative search using a simulated repulsive force.
Algorithm
First initialize the data set by arranging the circles across the surface in any kind of algorithm. This is just for initialization, so it doesn't have to be great. The periodic table code will do nicely. Also, assign each circle a "mass" using its radius as its mass value.
Now begin the iteration to converge on a solution. For each pass through the main loop, do the following:
Compute repulsive forces for each circle. Model your repulsive force after the formula for gravitational force, with two adjustments: (a) objects should be pushed away from each other, not attracted toward each other, and (b) you'll need to tweak the "force constant" value to fit the scale of your model. Depending on your math ability you may be able to calculate a good constant value during planning; otherwise just experiment a little at first and you'll find a good value.
After computing the total forces on each circle (please look up the n-body problem if you're not sure how to do this), move each circle along the vector of its total calculated force, using the length of the vector as the distance to move. This is where you may find that you have to tweak the force constant value. At first you'll want movements with lengths that are less than 5% of the radius of the sphere.
The movements in step 2 will have pushed the circles off the surface of the sphere (because they are repelling each other). Now move each circle back to the surface of the sphere, in the direction toward the center of the sphere.
For each circle, calculate the distance from the circle's old position to its new position. The largest distance moved is the movement length for this iteration in the main loop.
Continue iterating through the main loop for a while. Over time the movement length should become smaller and smaller as the relative positions of the circles stabilize into an arrangement that meets your criteria. Exit the loop when the movement length drops below some very small value.
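A rough sketch of one pass through that loop, assuming each circle is stored as a THREE.Vector3 position on a unit sphere with a mass property attached, and a force constant k that you tune as described under Tweaking below (all names here are illustrative):
var k = 0.001; // force constant, tune experimentally
var forces = circles.map(function () { return new THREE.Vector3(); });
var maxMove = 0;

// step 1: accumulate pairwise repulsive forces (n-body style)
for (var i = 0; i < circles.length; i++) {
    for (var j = i + 1; j < circles.length; j++) {
        var d = new THREE.Vector3().subVectors(circles[i], circles[j]);
        var r2 = Math.max(d.lengthSq(), 1e-6); // avoid division by zero
        var f = (k * circles[i].mass * circles[j].mass) / r2;
        d.normalize().multiplyScalar(f);
        forces[i].add(d); // push i away from j
        forces[j].sub(d); // and j away from i
    }
}

// steps 2 and 3: move each circle along its total force, then project it back onto the sphere
for (var i = 0; i < circles.length; i++) {
    var oldPos = circles[i].clone();
    circles[i].add(forces[i]).normalize(); // unit sphere, so normalizing re-projects
    // step 4: track the largest movement this pass
    maxMove = Math.max(maxMove, circles[i].distanceTo(oldPos));
}
// step 5: stop iterating once maxMove drops below a small threshold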
Tweaking
You may find that you have to tweak the force calculation to get the algorithm to converge on a solution. How you tweak depends on the type of result you're looking for. Start by tweaking the force constant. If that doesn't work, you may have to change the mass values up or down. Or maybe change the exponent of the radius in the force calculation. For example, instead of this:
f = ( k * m[i] * m[j] ) / ( r * r );
You might try this:
f = ( k * m[i] * m[j] ) / pow( r, p );
Then you can experiment with different values of p.
You can also experiment with different algorithms for the initial distribution.
The amount of trial-and-error will depend on your design goals.
Here is something you can build on perhaps. It will randomly distribute your spheres along a sphere. Later we will iterate over this starting point to get an even distribution.
// Random point on sphere of radius R
var R = 1.0;
var sphereCenters = [];
var numSpheres = 100;
for (var i = 0; i < numSpheres; i++) {
    // map Math.random() from [0, 1) to [-1, 1) so the direction can point anywhere on the sphere
    var vec = new THREE.Vector3(Math.random() * 2 - 1, Math.random() * 2 - 1, Math.random() * 2 - 1).normalize();
    var sphereCenter = new THREE.Vector3().copy(vec).multiplyScalar(R);
    sphereCenter.radius = Math.random() * 5; // Random sphere size. Plug in your sizes here.
    sphereCenters.push(sphereCenter);
    // Create a Three.js sphere at sphereCenter
    ...
}
Then run the below code a few times to pack the spheres efficiently:
for (var i = 0; i < sphereCenters.length; i++) {
    for (var j = 0; j < sphereCenters.length; j++) {
        if (i === j)
            continue;
        // Calculate the distance between sphereCenters[i] and sphereCenters[j]
        var dist = new THREE.Vector3().copy(sphereCenters[i]).sub(sphereCenters[j]);
        // sphereSize: the minimum allowed distance between the two centers,
        // e.g. sphereCenters[i].radius + sphereCenters[j].radius
        if (dist.length() < sphereSize) {
            // Move the center of this sphere to compensate.
            // How far do we have to move?
            var mDist = sphereSize - dist.length();
            // Perturb the sphere in the direction of dist by magnitude mDist
            var mVec = new THREE.Vector3().copy(dist).normalize();
            mVec.multiplyScalar(mDist);
            // Offset the actual sphere, then re-project it back onto the sphere of radius R
            sphereCenters[i].add(mVec).normalize().multiplyScalar(R);
        }
    }
}
Running the second section a number of times will "converge" on the solution you are looking for. You have to choose how many times it should be run in order to find the best trade-off between speed and accuracy.
You can use the same code as in the periodic table of elements.
The rectangles there do not touch, so you can get the same effect with circles using virtually the same code.
Here is the code they have:
var vector = new THREE.Vector3();
for ( var i = 0, l = objects.length; i < l; i ++ ) {
    var phi = Math.acos( -1 + ( 2 * i ) / l );
    var theta = Math.sqrt( l * Math.PI ) * phi;
    var object = new THREE.Object3D();
    object.position.x = 800 * Math.cos( theta ) * Math.sin( phi );
    object.position.y = 800 * Math.sin( theta ) * Math.sin( phi );
    object.position.z = 800 * Math.cos( phi );
    vector.copy( object.position ).multiplyScalar( 2 );
    object.lookAt( vector );
    targets.sphere.push( object );
}
What is PlaneBufferGeometry exactly and how it is different from PlaneGeometry? (r69)
PlaneBufferGeometry is a low-memory alternative to PlaneGeometry. The objects themselves differ in a lot of ways. For instance, the vertices of a PlaneBufferGeometry are located in PlaneBufferGeometry.attributes.position instead of PlaneGeometry.vertices.
You can take a quick look in the browser console to figure out more differences, but as far as I understand, since the vertices are usually spaced at a uniform distance (X and Y) from each other, only the heights (Z) need to be given to position a vertex.
The main differences are between Geometry and BufferGeometry.
Geometry is a "user-friendly", object-oriented data structure, whereas BufferGeometry is a data structure that maps more directly to how the data is used in the shader program. BufferGeometry is faster and requires less memory, but Geometry is in some ways more flexible, and certain operations can be done with greater ease.
I have very little experience with Geometry, as I have found that BufferGeometry does the job in most cases. It is useful to learn, and work with, the actual data structures that are used by the shaders.
In the case of a PlaneBufferGeometry, you can access the vertex positions like this:
let pos = geometry.getAttribute("position");
let pa = pos.array;
Then set z values like this:
var hVerts = geometry.heightSegments + 1;
var wVerts = geometry.widthSegments + 1;
for (let j = 0; j < hVerts; j++) {
    for (let i = 0; i < wVerts; i++) {
        // +0 is x, +1 is y, +2 is z
        pa[3 * (j * wVerts + i) + 2] = Math.random();
    }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
Randomness is just an example. You could also, for example, plot a function of x and y, if you let x = pa[3*(j*wVerts+i)]; and let y = pa[3*(j*wVerts+i)+1]; in the inner loop. For a small performance benefit in the PlaneBufferGeometry case, let y = (0.5 - j/(hVerts-1)) * geometry.height in the outer loop instead.
geometry.computeVertexNormals(); is recommended if your material uses normals and you haven't calculated more accurate normals analytically. If you don't supply or compute normals, the material will use the default plane normals which all point straight out of the original plane.
Note that the number of vertices along a dimension is one more than the number of segments along the same dimension.
Note also that (counterintuitively) the y values are flipped with respect to the j indices: vertices.push( x, - y, 0 ); (source)
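Putting the pieces together, a minimal sketch (assuming an r69-era three.js and an existing scene to add the mesh to):
// 20x20 world units, 10x10 segments, i.e. 11x11 vertices
var geometry = new THREE.PlaneBufferGeometry(20, 20, 10, 10);
var pos = geometry.getAttribute('position');
var pa = pos.array;

var wVerts = 10 + 1, hVerts = 10 + 1;
for (var j = 0; j < hVerts; j++) {
    for (var i = 0; i < wVerts; i++) {
        pa[3 * (j * wVerts + i) + 2] = Math.random(); // displace z
    }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();

var mesh = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0x88aa66 }));
scene.add(mesh);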
I'm trying to smooth the data I'm getting from the deviceOrientation API to make a Google Cardboard application in the browser.
I'm piping the accelerometer data straight into the Three.js camera rotation, but we're getting a lot of noise on the signal, which is causing the view to judder.
Someone suggested a Kalman filter as the best way to approach smoothing signal processing noise, and I found this simple JavaScript library on GitHub:
https://github.com/itamarwe/kalman
However, it's really light on documentation.
I understand that I need to create a Kalman model by providing a Vector and 3 Matrices as arguments and then update the model, again with a vector and matrices as arguments over a time frame.
I also understand that a Kalman filter equation has several distinct parts: the current estimated position, the Kalman gain value, the current reading from the orientation API and the previous estimated position.
I can see that a point in 3D space can be described as a Vector so any of the position values, such as an estimated position, or the current reading can be a Vector.
What I don't understand is how these parts could be translated into Matrices to form the arguments for the Javascript library.
Well, I wrote the abhorrently documented library a couple of years ago. If there's interest I'm definitely willing to upgrade it, improve the documentation and write tests.
Let me briefly explain what all the different matrices and vectors are and how they should be derived:
x - this is the vector that you try to estimate. In your case, it's probably the 3 angular accelerations.
P - is the covariance matrix of the estimation, meaning the uncertainty of the estimation. It is also estimated in each step of the Kalman filter along with x.
F - describes how X develops according to the model. Generally, the model is x[k] = Fx[k-1]+w[k]. In your case, F might be the identity matrix, if you expect the angular acceleration to be relatively smooth, or the zero matrix, if you expect the angular acceleration to be completely unpredictable. In any case, w would represent how much you expect the acceleration to change from step to step.
w - describes the process noise, meaning, how much does the model diverge from the "perfect" model. It is defined as a zero mean multivariate normal distribution with covariance matrix Q.
All the variables above define your model, meaning what you are trying to estimate. In the next part, we talk about the model of the observation - what you measure in order to estimate your model.
z - this is what you measure. In your case, since you are using the accelerometers, you are measuring what you are also estimating. It will be the angular accelerations.
H - describes the relation between your model and the observation. z[k]=H[k]x[k]+v[k]. In your case, it is the identity matrix.
v - is the measurement noise and is assumed to be zero-mean Gaussian white noise with covariance R[k]. Here you need to measure how noisy the accelerometers are, and calculate the noise covariance matrix.
To summarize, the steps to use the Kalman filter:
Determine x[0] and P[0] - the initial state of your model, and the initial estimation of how accurately you know x[0].
Determine F based on your model and how it develops from step to step.
Determine Q based on the stochastic nature of your model.
Determine H based on the relation between what you measure and what you want to estimate (between the model and the measurement).
Determine R based on the measurement noise. How noisy is your measurement.
Then, with every new observation, you can update the model state estimation using the Kalman filter, and have an optimal estimation of the state of the model(x[k]), and of the accuracy of that estimation(P[k]).
var acc = {
    x: 0,
    y: 0,
    z: 0
};
var count = 0;
if (window.DeviceOrientationEvent) {
    window.addEventListener('deviceorientation', getDeviceRotation, false);
} else {
    $(".accelerometer").html("NOT SUPPORTED");
}
var x_0 = $V([acc.x, acc.y, acc.z]); // vector. Initial accelerometer values

// P prior knowledge of state
var P_0 = $M([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]
]); // identity matrix. Initial covariance. Set to 1

var F_k = $M([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]
]); // identity matrix. How change to model is applied. Set to 1

var Q_k = $M([
    [0, 0, 0],
    [0, 0, 0],
    [0, 0, 0]
]); // zero matrix. Noise in system is zero

var KM = new KalmanModel(x_0, P_0, F_k, Q_k);

var z_k = $V([acc.x, acc.y, acc.z]); // Updated accelerometer values

var H_k = $M([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1]
]); // identity matrix. Describes relationship between model and observation

var R_k = $M([
    [2, 0, 0],
    [0, 2, 0],
    [0, 0, 2]
]); // 2x scalar matrix. Describes noise from sensor. Set to 2 to begin

var KO = new KalmanObservation(z_k, H_k, R_k);
// each 1/10th second take new reading from accelerometer to update
var getNewPos = window.setInterval(function () {
    KO.z_k = $V([acc.x, acc.y, acc.z]); // vector to be new reading from x, y, z
    KM.update(KO);
    $(".kalman-result").html(" x:" + KM.x_k.elements[0] + ", y:" + KM.x_k.elements[1] + ", z:" + KM.x_k.elements[2]);
    $(".difference").html(" x:" + (acc.x - KM.x_k.elements[0]) + ", y:" + (acc.y - KM.x_k.elements[1]) + ", z:" + (acc.z - KM.x_k.elements[2]));
}, 100);
// read event data from device
function getDeviceRotation(evt) {
    // gamma is the left-to-right tilt in degrees, where right is positive
    // beta is the front-to-back tilt in degrees, where front is positive
    // alpha is the compass direction the device is facing in degrees
    acc.x = evt.alpha;
    acc.y = evt.beta;
    acc.z = evt.gamma;
    $(".accelerometer").html(" x:" + acc.x + ", y:" + acc.y + ", z:" + acc.z);
}
Here is a demo page showing my results
http://cardboard-hand.herokuapp.com/kalman.html
I've set the sensor noise to a 2x scalar matrix for now to see if the Kalman filter was doing its thing, but we have noticed the sensor has greater variance on the x axis when the phone is lying flat on the table. We think this might be an issue with gimbal lock. We haven't tested, but it's possible the variance changes on each axis depending on the orientation of the device.