I have an image that I am mapping onto a PlaneGeometry using TextureLoader#load() — it renders nicely as long as my camera is not looking "along" the plane, in which case it (of course) has zero width and disappears from view.
I'm trying to figure out how to give the image some depth, so my current ideas are:
Generate a ParticleSystem where the particle at (x, y) shares its color with the (greyscale) pixel in the image at (x, y) — effectively, making each pixel a camera-facing billboard
"Cube-ifying" each pixel in the image so it takes up some space in the z direction. So a pixel at (x, y) becomes a cube at (x, y, z) with some size in all three dimensions.
Both allow me to look 'along' the edge of the image and see something, whereas the plane is invisible.
My first question is: is this possible? And second, what is the name for this voxelization technique, and is it natively supported in three.js?
Very happy to provide more info or some basic code if that's at all helpful.
It is definitely possible.
The idea with the cubes should be doable in any decent 3D engine. It's just a matter of cloning or instancing cubes, then positioning and coloring them accordingly; a rough sketch follows below.
I'm not sure if there is a name for this technique.
I'm also not sure about particle systems in three.js, but I suppose that would be an option too.
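For illustration, a minimal sketch of the cube approach, assuming the image's RGBA pixels have been read into data via getImageData (all names here are illustrative):

var cubeGeometry = new THREE.BoxGeometry(1, 1, 1);
for (var y = 0; y < imgHeight; y++) {
    for (var x = 0; x < imgWidth; x++) {
        var grey = data[(y * imgWidth + x) * 4] / 255; // R channel of the RGBA pixel
        var material = new THREE.MeshLambertMaterial({
            color: new THREE.Color(grey, grey, grey)
        });
        var cube = new THREE.Mesh(cubeGeometry, material);
        cube.position.set(x, -y, 0); // one unit cube per pixel
        scene.add(cube);
    }
}

Note that one mesh and material per pixel gets expensive quickly; merging the cubes into a single geometry (or instancing them) is advisable for images of any real size.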
My real answer is this: use heightmaps.
They do pretty much exactly what you want. A heightmap is a greyscale texture, and for each vertex of the geometry it is mapped onto, it offsets that vertex's position according to the greyscale value (white = no offset, black = full offset).
http://danni-three.blogspot.de/2013/09/threejs-heightmaps.html
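A minimal sketch of that idea, assuming data holds one greyscale value per pixel from getImageData, imgWidth/imgHeight are the image dimensions, and a version of three.js where geometries expose a vertices array (all names illustrative):

var geometry = new THREE.PlaneGeometry(imgWidth, imgHeight, imgWidth - 1, imgHeight - 1);
for (var i = 0; i < geometry.vertices.length; i++) {
    // one vertex per pixel, offset along the plane's normal
    // (white = no offset, black = full offset)
    geometry.vertices[i].z = (255 - data[i]) / 255 * maxHeight;
}
geometry.computeFaceNormals();
geometry.computeVertexNormals();
scene.add(new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ map: texture })));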
I saw a d3 helper function which can create an SVG arc: making an arc in d3.js
Is there a way to bend an existing SVG into an arc in d3.js? If not, how can it be done in JavaScript? The initial image could be a square with negative space inside that makes a silhouette of something like a cat.
I included a drawing of what I want. I eventually want to make more and create a ring, and then concentric rings of smaller transformations.
I think conformal mapping is the way to go. As an example, I have started with the original square source and produced the circularly arranged (and appropriately warped) squares using conformal mapping. See the Circular Image and the Corresponding Square Image. Note, however, that I first converted the SVG source into a PNG image, and then applied the conformal mapping transformation to each pixel in the PNG image. If anyone has a method of mapping entirely in the SVG domain [where the source uses only straight lines and Bezier curves], please let us know!
I am not able to provide the code -- it is too complex and uses proprietary libraries -- but here is a way to do it.
In order to compute the conformal mapping, I used a complex algebra library; JEP is one example. For each pixel in the OUTPUT image, you set Z = (x, y), where (x, y) are the coordinates of the pixel and Z is a complex variable with real part x and imaginary part y. Then you compute NEWZ = f(Z), where f is any function you want. For the circle, I used an expression such as "(ln(Z*1)/pi)*6-9", where the log (ln) provides the circularity, the 6 implies 6 bent squares in a circle and the -9 is a scaling factor to center and place the image. From the (X, Y) values of NEWZ, you look up the old image to find the pixel to place in the new image. When the (X, Y) values are outside the size of the old image, I just tile the old image to determine the pixel. When X and Y are fractional (non-integer), you average the neighboring pixels in a weighted way.
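As a rough JavaScript illustration of that per-pixel loop (the original used Java with the JEP library; the nearest-neighbour lookup here stands in for the weighted averaging, and all names are illustrative):

// srcCtx/dstCtx are same-size 2D canvas contexts
function conformalMap(srcCtx, dstCtx, w, h) {
    var src = srcCtx.getImageData(0, 0, w, h);
    var dst = dstCtx.createImageData(w, h);
    for (var y = 0; y < h; y++) {
        for (var x = 0; x < w; x++) {
            // Z = x + iy for the OUTPUT pixel; NEWZ = (ln(Z)/pi)*6 - 9,
            // using ln(Z) = ln|Z| + i*atan2(y, x)
            var newX = (Math.log(Math.sqrt(x * x + y * y) || 1) / Math.PI) * 6 - 9;
            var newY = (Math.atan2(y, x) / Math.PI) * 6;
            // tile the old image when (X, Y) falls outside its bounds
            var sx = ((Math.round(newX) % w) + w) % w;
            var sy = ((Math.round(newY) % h) + h) % h;
            var s = (sy * w + sx) * 4, d = (y * w + x) * 4;
            for (var k = 0; k < 4; k++) dst.data[d + k] = src.data[s + k];
        }
    }
    dstCtx.putImageData(dst, 0, 0);
}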
Is there a way to create a Three.js 3D line series with width and thickness?
Even though the Three.js line object supports linewidth, this attribute is not yet supported in all browsers on all platforms in WebGL.
Here's where you set linewidth in Three.js:
var material = new THREE.LineBasicMaterial({
color: 0xff0000,
linewidth: 5
});
The Three.js ribbon object - which had width - has recently been dropped.
The Three.js tube object generates 3D extrusions but - being Bezier-based - the lines do not pass through the control points.
Can anybody think of a method of drawing a line series (polylines, plotlines) in Three.js that has some sort of user definable 'bulk' such as width, thickness or radius?
This question may be a restating of this question:
Extruding a graph in three.js.
Given that I do not think that there is a readily available method, I would be happy to participate in an effort to create a simple function that responds to this question.
But a response that points to an existing workable method would be cool...
As WestLangley suggests, one possible solution includes the polyline being of constant pixel width - as is currently available with the Three.js canvas renderer.
A comparison of the two renderers is shown here:
Canvas and WebGL Lines Compared via GitHub Pages
Canvas and WebGL Lines Compared via jsFiddle
A solution where you could specify linewidth and similar results occurred on both renderers would be very cool.
There are, however, other ways of thinking about 3D lines, where the lines are actual physical constructs: they cast shadows and respond to events. These also need to be looked into.
Here are links to GitHub Pages with two demos of lines made up of multiple meshes:
Sphere and Cylinder Polylines
An 'expensive' solution: each joint is made up of a full sphere.
Cubes Polylines
My guess is that building either of these as smooth single meshes will be a complex problem to solve. So in the meantime, here is a link to a partial visualization of 3D lines that are wide and have height:
3D Box Line on jsFiddle
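For reference, a rough sketch of the sphere-and-cylinder approach, assuming points is an array of THREE.Vector3 and a version of three.js with Quaternion.setFromUnitVectors (names are illustrative):

var material = new THREE.MeshLambertMaterial({ color: 0xff0000 });
var radius = 0.5;
for (var i = 0; i < points.length; i++) {
    // a sphere at every joint hides the seams between segments
    var joint = new THREE.Mesh(new THREE.SphereGeometry(radius, 16, 16), material);
    joint.position.copy(points[i]);
    scene.add(joint);
    if (i === 0) continue;
    // a cylinder between consecutive points forms the segment itself
    var start = points[i - 1], end = points[i];
    var length = start.distanceTo(end);
    var segment = new THREE.Mesh(new THREE.CylinderGeometry(radius, radius, length, 16), material);
    segment.position.copy(start).lerp(end, 0.5);
    // CylinderGeometry is built along Y; orient it along the segment
    segment.quaternion.setFromUnitVectors(
        new THREE.Vector3(0, 1, 0),
        end.clone().sub(start).normalize());
    scene.add(segment);
}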
The goal is to have code 'with a low level of complexity - in other words - for dummies'. Thus a 3D line should be as easy and as familiar as adding a sphere or cube: geometry + material = mesh > scene. And the geometry should be quite economical in terms of creating vertices and faces.
The lines should have width and height. Up is always in the Y direction. The demo shows this. What the demo does not show is corners being mitred nicely...
I cooked up a possible solution which I believe meets most of your requirements:
http://codepen.io/garciahurtado/pen/AGEsf?editors=001
The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.
The shader is inspired by the blur shaders in the ThreeJS distro, which essentially copy the image a bunch of times along the horizontal and vertical axis. I automated that process and made the number of copies a user defined parameter, while ensuring that the copies were offset by 1 pixel.
I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a poly line.
The real meat and potatoes of this thing is in the custom shader (fragment shader portion):
uniform sampler2D tDiffuse;
uniform int edgeWidth;
uniform int diagOffset;
uniform float totalWidth;
uniform float totalHeight;
const int MAX_LINE_WIDTH = 30; // Needed due to weird limitations in GLSL around for loops
varying vec2 vUv;
void main() {
    int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
    vec4 color = vec4( 0.0, 0.0, 0.0, 0.0 );

    // Horizontal copies of the wireframe first
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalWidth);
        float newUvX = vUv.x + float(i - offset) * uvFactor;
        float newUvY = vUv.y + (float(i - offset) * float(diagOffset)) * uvFactor; // only modifies vUv.y if diagOffset > 0

        color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));

        // GLSL does not allow loop comparisons against dynamic variables. Workaround below
        if (i == edgeWidth) break;
    }

    // Now we create the vertical copies
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalHeight);
        float newUvX = vUv.x + (float(i - offset) * float(-diagOffset)) * uvFactor; // only modifies vUv.x if diagOffset > 0
        float newUvY = vUv.y + float(i - offset) * uvFactor;

        color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));

        if (i == edgeWidth) break;
    }

    gl_FragColor = color;
}
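For completeness, here is a rough sketch of how such a shader might be wired up as a full screen pass, assuming the EffectComposer/RenderPass/ShaderPass helpers from the three.js examples are included, and that the uniforms above live in a shader object called ThickLineShader (a hypothetical name):

var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

var thickPass = new THREE.ShaderPass(ThickLineShader); // { uniforms, vertexShader, fragmentShader }
thickPass.uniforms.edgeWidth.value = 4;
thickPass.uniforms.diagOffset.value = 0;
thickPass.uniforms.totalWidth.value = window.innerWidth;
thickPass.uniforms.totalHeight.value = window.innerHeight;
thickPass.renderToScreen = true;
composer.addPass(thickPass);

// in the render loop, replace renderer.render(scene, camera) with:
composer.render();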
Pros:
No need for additional geometry beyond the line vertices
Line thickness is user definable
A full screen shader should be relatively gentle on the GPU
Can be implemented fully within the WebGL canvas
Cons:
Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (+8 pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.
As a potential solution, you could take your 3D points, then use the THREE.Vector3.project method to figure out screen-space coordinates, and then simply use the canvas and its lineTo and moveTo operations. The canvas 2D context does support variable line thickness.
var w = renderer.domElement.width;
var h = renderer.domElement.height;
// project from world space into normalized device coordinates (-1..1)
vector.project(camera);
context2d.lineWidth = 3;
// map NDC to canvas pixels (canvas y runs downward)
var x = (vector.x + 1) * (w / 2);
var y = h - (vector.y + 1) * (h / 2);
context2d.lineTo(x, y);
Also, I don't think you can use the same canvas for that, so it would have to be a layer (another canvas) above your GL rendering context canvas.
If you have infrequent camera changes, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. For an orthographic camera this would work best, as only rotations would require vertex position manipulation.
Lastly, you could disable canvas clearing and draw your lines several times with offsets inside a circle or a box; after that you can re-enable clearing. This requires several extra draw operations, but it's probably the most scalable approach. A rough sketch follows below.
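A sketch of that multi-pass idea with the WebGL renderer, assuming a standard three.js setup (the one-pixel box of offsets is illustrative):

renderer.autoClear = false;
renderer.clear();
var w = renderer.domElement.width, h = renderer.domElement.height;
for (var dx = -1; dx <= 1; dx++) {
    for (var dy = -1; dy <= 1; dy++) {
        // shift the whole view by up to a pixel in each direction and re-render
        camera.setViewOffset(w, h, dx, dy, w, h);
        renderer.render(scene, camera);
    }
}
camera.setViewOffset(w, h, 0, 0, w, h); // reset the view offset
renderer.autoClear = true;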
The reason lines don't work as you'd expect out of the box is due to how ANGLE works; it's used in Chrome and Firefox (to my knowledge) and emulates OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support for line thickness up to 1, so they do not see it as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSes, though, where ANGLE is not used.
I'm using Three.js (without shaders, only with existing object methods) in order to create animations, but my question is very simple: I'm sure it's possible, but can you tell me (or help me figure out) how I should combine several animations on a shape? For example, rotating and translating a sphere.
When I'm doing:
three.sphere.rotation.y += 0.1;
three.sphere.translateZ(1);
the sphere rotates but the translation vector is also rotating, so the translation has no effect.
I know a bit of OpenGL and I have already used the glPushMatrix and glPopMatrix functions, so do they exist in this framework?
Cheers
Each three.js Object3D has a position, a rotation and a scale. The rotation (always relative to the object's origin or "center") defines its own local axis coordinates (say, what the object sees as its own "front, up, right" directions), and when you call translateZ, the object is moved along those local directions (not along the world's - or parent's - Z axis). If you want the latter, do three.sphere.position.z += 1 instead.
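A quick sketch contrasting the two movements (assuming three.sphere is a THREE.Mesh, as in the question):

three.sphere.rotation.y += 0.1;   // spin around the sphere's own Y axis
three.sphere.position.z += 1;     // move along the WORLD (parent) Z axis, unaffected by the spin
// whereas this would move along the sphere's rotated local Z axis:
// three.sphere.translateZ( 1 );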
The order of transformation is important. You get a different result if you translate first and then rotate than if you rotate first and then translate. Of course with a sphere it will be hard to see the rotation.
I am using the Three.js library to display a point cloud in a web browser. The point cloud is generated once at start up and no further points are added or removed, but it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here
Using the example I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point clouds without using the image? The sphere geometry for example.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is. How do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient system for my particular case or should I just generate the particles and set their individual positions? As I mentioned I only generate the points once, but then rotate, zoom, pan the points. (I used the TrackBall sample code to get the mouse events working).
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function ( context )
{
context.beginPath();
context.arc( 0, 0, 1, 0, PI2, true );
context.closePath();
context.fill();
};
var particle = new THREE.Particle( new THREE.ParticleCanvasMaterial( {
color: Math.random() * 0x808008 + 0x808080,
program: program
} ) );
Feel free to adapt the code for the WebGL renderer.
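A rough sketch of that adaptation, assuming the same era of three.js that has THREE.ParticleSystem (names are illustrative):

// draw the same circle program into a small offscreen canvas...
var c = document.createElement('canvas');
c.width = c.height = 64;
var ctx = c.getContext('2d');
ctx.translate(32, 32);
ctx.scale(32, 32);          // the program draws a unit circle at the origin
ctx.fillStyle = '#ffffff';
program(ctx);               // the canvas program from above

// ...and use it as a particle texture in the WebGL renderer
var texture = new THREE.Texture(c);
texture.needsUpdate = true;
var material = new THREE.ParticleBasicMaterial({ map: texture, size: 2, transparent: true });
scene.add(new THREE.ParticleSystem(geometry, material));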
Another clever solution I've seen in the examples is using an encoded webm video to store the data and pass that to a GLSL shader which is rendered through a particle system in three.js
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3
I saw the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Adding this to the fragment shader fixed the problem for me.
It wasn't z-fighting, because some corners would randomly overlap distant particles.
Setting material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency overlapping problem of the underlying square structure by turning off depth testing:
depthTest: false
The problem then is that, if you are adding additional objects to the scene, the depth testing will fail and the PointCloud will be rendered in front of the other objects, ignoring the actual order. To get around that, you can additionally disable depth writes:
depthWrite: false
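For example, a minimal sketch of a point material with both flags applied, assuming a version of three.js that has THREE.PointCloudMaterial (sprite and geometry are illustrative):

var pointMaterial = new THREE.PointCloudMaterial({
    size: 2,
    map: sprite,          // the circular point texture
    transparent: true,
    depthTest: false,     // avoids the square-edge overlap artifacts
    depthWrite: false     // avoids incorrectly occluding other scene objects
});
scene.add(new THREE.PointCloud(geometry, pointMaterial));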
I want to check a collision between two Sprites in HTML5 canvas. So for the sake of the discussion, let's assume that both sprites are IMG objects and a collision means that the alpha channel is not 0. Now both of these sprites can have a rotation around the object's center but no other transformation in case this makes this any easier.
Now the obvious solution I came up with would be this:
calculate the transformation matrix for both
figure out a rough estimation of the area where the code should test (like offset of both + calculated extra space for the rotation)
for all the pixels in the intersecting rectangle, transform the coordinate and test the image at the calculated position (rounded to nearest neighbor) for the alpha channel, then abort on first hit (a sketch of this step follows below)
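Something like this for the inner test (a rough sketch; rotation is assumed to be around each sprite's center, and all names are illustrative):

// for each pixel (px, py) of the intersecting rectangle, sample one
// sprite's alpha by undoing that sprite's rotation around its center
function alphaAt(sprite, imageData, px, py) {
    var cx = sprite.x + sprite.width / 2;
    var cy = sprite.y + sprite.height / 2;
    var cos = Math.cos(-sprite.rotation), sin = Math.sin(-sprite.rotation);
    var dx = px - cx, dy = py - cy;
    // rotate back, then convert to image-local coordinates (nearest neighbor)
    var lx = Math.round(cx + dx * cos - dy * sin - sprite.x);
    var ly = Math.round(cy + dx * sin + dy * cos - sprite.y);
    if (lx < 0 || ly < 0 || lx >= sprite.width || ly >= sprite.height) return 0;
    return imageData.data[(ly * sprite.width + lx) * 4 + 3]; // alpha channel
}
// a collision: alphaAt(spriteA, dataA, px, py) > 0 && alphaAt(spriteB, dataB, px, py) > 0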
The problem I see with that is that (a) there are no matrix classes in JavaScript, which means I have to do the math myself, which could be quite slow, and (b) I have to test for collisions every frame, which makes this pretty expensive. Furthermore, I have to replicate something I already have to do on drawing (or what canvas does for me: setting up the matrices).
I wonder if I'm missing anything here and if there is an easier solution for collision detection.
I'm not a JavaScript coder, but I'd imagine the same optimisation tricks work just as well in JavaScript as they do in C++.
Just rotate the corners of the sprite instead of every pixel. Effectively you would be doing something like software texture mapping. You could work out the x,y position of a given pixel using various gradient information. Look up software texture mapping for more info.
If you quadtree-decompose the sprite into "hit" and "non-hit" areas, then you can efficiently check whether a given quadtree cell is "all non-hit", "all hit" or a "possible hit" (i.e. it contains both hit and non-hit pixels). The first two are trivial to pass through. In the last case you then go down to the next decomposition level and repeat the test. This way you only check the pixels you need to, and for large areas of "non-hit" and "hit" you don't have to do such a complex set of checks.
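A very rough sketch of that recursion, assuming both sprites' alpha data has already been resampled into a shared, axis-aligned overlap square of power-of-two size; classify would normally be precomputed per sprite rather than scanning pixels on the fly (all names are hypothetical):

function classify(alphaAt, x, y, size) {
    // label a square region as all 'hit', all 'miss', or 'mixed'
    var hits = 0;
    for (var j = 0; j < size; j++)
        for (var i = 0; i < size; i++)
            if (alphaAt(x + i, y + j) > 0) hits++;
    return hits === 0 ? 'miss' : hits === size * size ? 'hit' : 'mixed';
}

function collide(aAlphaAt, bAlphaAt, x, y, size) {
    var a = classify(aAlphaAt, x, y, size);
    var b = classify(bAlphaAt, x, y, size);
    if (a === 'miss' || b === 'miss') return false; // trivially no hit
    if (a === 'hit' && b === 'hit') return true;    // trivially a hit
    if (size === 1) return true;                    // both non-empty at pixel level
    var h = size / 2; // 'possible hit': descend to the next decomposition level
    return collide(aAlphaAt, bAlphaAt, x,     y,     h) ||
           collide(aAlphaAt, bAlphaAt, x + h, y,     h) ||
           collide(aAlphaAt, bAlphaAt, x,     y + h, h) ||
           collide(aAlphaAt, bAlphaAt, x + h, y + h, h);
}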
Anyway, those are just a couple of thoughts.
I have to replicate something I already have to do on drawing
Well, you could make a new rendering context, plot one rotated white-background mask to it, set the compositing operation to lighter and plot the other rotated mask on top at the given offset.
Now if there's a non-white pixel left, there's a hit. You'd still have to getImageData and sift through the pixels to find that out. You might be able to reduce that workload a bit by scaling the resultant image downwards (relying on anti-aliasing to keep some pixels non-white), but I'm thinking it's probably still going to be quite slow.
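A rough sketch of that check, assuming maskA and maskB are same-size canvases, each with a white background and the rotated sprite painted solid black (names are illustrative):

function masksCollide(maskA, maskB) {
    var w = maskA.width, h = maskA.height;
    var scratch = document.createElement('canvas');
    scratch.width = w; scratch.height = h;
    var ctx = scratch.getContext('2d');
    ctx.drawImage(maskA, 0, 0);
    ctx.globalCompositeOperation = 'lighter'; // additive: white + anything clamps to white
    ctx.drawImage(maskB, 0, 0);
    // only pixels where BOTH masks were black remain non-white
    var data = ctx.getImageData(0, 0, w, h).data;
    for (var i = 0; i < data.length; i += 4) {
        if (data[i] < 255) return true; // found a hit
    }
    return false;
}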
I have to test for collisions every frame which makes this pretty expensive.
Yeah, I think realistically you're going to be using precalculated collision tables. If you've got space for it, you could store one hit/no hit bit for every combination of sprite a, sprite b, relative rotation, relative-x-normalised-to-rotation and relative-y-normalised-to-rotation. Depending on how many sprites you have and how many steps of rotation or movement, this could get rather large.
A compromise would be to store the pre-rotated masks of each sprite in a JavaScript array (of Number, giving you 32 bits of easily &&-able pixel data per element, or as characters in a String, giving you 16 bits) and && each line of intersecting sprite masks together.
Or, give up on pixels and start looking at, e.g., paths.
I had the same problem; here is an alternative solution. First, I use getImageData to find a polygon that surrounds the sprite. Be careful here, because the implementation only works with images on a transparent background that contain a single solid object, like a ship. The next step is the Ramer-Douglas-Peucker algorithm, to reduce the number of vertices in the polygon. I finally get a polygon of very few vertices, which is easy and cheap to rotate and check for collisions against the other sprites' polygons.
http://jsfiddle.net/rnrlabs/9dxSg/
var canvas = document.getElementById("canvas");
var context = canvas.getContext("2d");
var img = document.getElementById("img");
context.drawImage(img, 0, 0);
var dat = context.getImageData(0, 0, img.width, img.height);
// findStartPixel and followPath are defined in the jsfiddle:
// they trace the outline of the single solid object in the image
var startPixel = findStartPixel(dat, 0);
var path = followPath(startPixel, dat, 0);
// simplify the outline with Ramer-Douglas-Peucker; 4 is the RDP epsilon
var map = properRDP(path.map, 4, path.startpixel.x, path.startpixel.y);
// draw the simplified polygon
context.beginPath();
context.moveTo(path.startpixel.x, path.startpixel.y);
for(var i = 0; i < map.length; i++) {
    var p = map[i];
    context.lineTo(p.x, p.y);
}
context.strokeStyle = 'red';
context.closePath();
context.stroke();