I have been playing with GPUComputationRenderer on a modified version of this three.js example, which modifies the velocity of interacting boids using GPU shaders to hold, read and manipulate boid position and velocity data.
I have got to the stage where I can put GPU-computed data (predicted collision times) into the texture buffer using the shader. But now I want to read some of that texture data back inside the main JavaScript animation script (to find the earliest collision).
Here is the relevant code in the render function (which is called on each animation pass):
//... GPU calculations as per original THREE.js example
gpuCompute.compute(); //... gpuCompute is the gpu computation renderer.
birdUniforms.texturePosition.value = gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
birdUniforms.textureVelocity.value = gpuCompute.getCurrentRenderTarget( velocityVariable ).texture;
var xTexture = birdUniforms.texturePosition.value;//... my variable, OK.
//... From http://zhangwenli.com/blog/2015/06/20/read-from-shader-texture-with-threejs/
//... but note that this reads from the main THREE.js renderer NOT from the gpuCompute renderer.
//var pixelBuffer = new Uint8Array(canvas.width * canvas.height * 4);
//var gl = renderer.getContext();
//gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixelBuffer);
var pixelBuffer = new Uint8Array( WIDTH * WIDTH * 4); //... OK.
//var gl = gpuCompute.getContext();//... no getContext function!!!
//... from Nick Whaley here: http://stackoverflow.com/questions/13475209/three-js-get-data-from-three-webglrendertarget
//WebGLRenderer.readRenderTargetPixels ( renderTarget, x, y, width, height, buffer )
gpuCompute.readRenderTargetPixels ( xTexture, 0, 0, WIDTH, WIDTH, pixelBuffer ); //... readRenderTargetPixels is not a function!
As shown in the code, I was hoping the gpuCompute renderer object would provide functions such as .getContext() or .readRenderTargetPixels(), but they do not exist on gpuCompute.
EDIT:
Then I tried adding the following code:
//... the WebGLRenderer code is included in THREE.js build
myWebglRenderer = new THREE.WebGLRenderer();
var myRenderTarget = gpuCompute.getCurrentRenderTarget( positionVariable );
myWebglRenderer.readRenderTargetPixels( myRenderTarget, 0, 0, WIDTH, WIDTH, pixelBuffer );
This executes without error, but pixelBuffer remains entirely full of zeroes instead of the desired position coordinate values.
Can anybody suggest how I might read the texture data into a pixel buffer? (Preferably in THREE.js/plain JavaScript, because I am not familiar with raw WebGL.)
This answer is out of date. See link at bottom
The short answer is it won't be easy. In WebGL 1.0 there is no easy way to read pixels from floating point textures, which is what GPUComputationRenderer uses.
If you really want to read back the data, you'll need to render the GPUComputationRenderer floating point texture into an 8-bit RGBA texture, doing some kind of encoding from 32-bit floats to 8-bit values. You can then read that back in JavaScript and look at the values.
See WebGL Read pixels from floating point render target
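For what it's worth, here is a minimal sketch of that encode-and-read-back idea. It is not from the boids example: the names (encodeTarget, encodeMaterial, srcTex, packFloat, decodeFloat) are my own, the packing trick only handles values already normalised into the 0..1 range, and you must use the same renderer instance that drives gpuCompute.

// Render the float texture through an encoding pass into an 8-bit target, then read it back.
var encodeTarget = new THREE.WebGLRenderTarget( WIDTH, WIDTH, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.UnsignedByteType // 8-bit, so readRenderTargetPixels works everywhere
} );

var encodeMaterial = new THREE.ShaderMaterial( {
    uniforms: { srcTex: { value: null } },
    vertexShader: [
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    gl_Position = vec4( position, 1.0 );',
        '}'
    ].join( '\n' ),
    fragmentShader: [
        'uniform sampler2D srcTex;',
        'varying vec2 vUv;',
        // classic trick: pack a float in [0,1) into four bytes
        'vec4 packFloat( float v ) {',
        '    vec4 enc = vec4( 1.0, 255.0, 65025.0, 16581375.0 ) * v;',
        '    enc = fract( enc );',
        '    enc -= enc.yzww * vec4( 1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0 );',
        '    return enc;',
        '}',
        'void main() {',
        '    gl_FragColor = packFloat( texture2D( srcTex, vUv ).w ); // e.g. collision time stored in .w',
        '}'
    ].join( '\n' )
} );

var encodeScene = new THREE.Scene();
var encodeCamera = new THREE.Camera();
encodeScene.add( new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), encodeMaterial ) );

// each frame, after gpuCompute.compute():
encodeMaterial.uniforms.srcTex.value = gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
renderer.render( encodeScene, encodeCamera, encodeTarget ); // same renderer as gpuCompute; older render-to-target signature

var bytes = new Uint8Array( WIDTH * WIDTH * 4 );
renderer.readRenderTargetPixels( encodeTarget, 0, 0, WIDTH, WIDTH, bytes );

// inverse of packFloat, applied per pixel to the raw 0-255 bytes:
function decodeFloat( r, g, b, a ) {
    return r / 255 + g / 65025 + b / 16581375 + a / 4228250625;
}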
Sorry for the long delay; I've not logged in to SO for a long time.
In the example of water with tennis balls,
https://threejs.org/examples/?q=water#webgl_gpgpu_water
The height of the water at the balls' positions is read back from the GPU.
An integer 4-component (RGBA, unsigned byte) texture is used to emulate a 1-component float texture.
The texture has 4x1 pixels, where the first one is the height and the next two are the normal of the water surface (the last pixel is not used).
This texture is computed and read back for each one of the tennis balls, and the ball physics is performed on the CPU.
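For reference, the read-back step in that example boils down to something like the lines below (names follow the example source at the time of writing, so treat them as approximate):

// A tiny 4x1 unsigned-byte render target is filled by the "read water level" shader,
// then copied to the CPU and used for the ball physics.
var readWaterLevelImage = new Uint8Array( 4 * 1 * 4 );
renderer.readRenderTargetPixels( readWaterLevelRenderTarget, 0, 0, 4, 1, readWaterLevelImage );
// pixel 0 encodes the height, pixels 1-2 encode the surface normal, pixel 3 is unused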
Related
I've got a strange issue regarding the automatic resizing of textures by WebGLRenderer using threejs.
I know that WebGL requires the sizes of textures* to be a power of 2.
* that is, textures using a filter other than LinearFilter, or with wrapping not set to clamp
My texture has wrapping set to RepeatWrapping and the size of the texture is
65536 x 512, which is 2^16 x 2^9.
I'm assuming that the size of the texture is correct. However, the console says:
THREE.WebGLRenderer: Texture has been resized from (65536x512) to (16384x128)
It's very bad that the texture is downsized, because the quality loss is clearly visible in the rendered texture.
I don't really know what I'm doing wrong. According to the documentation, everything is set correctly.
Is there a possibility to prevent downsizing?
I don't think it's relevant, but I'm also including the code that loads the texture:
const texture = new TextureLoader().load(path);
texture.anisotropy = 2;
texture.magFilter = LinearFilter;
texture.minFilter = LinearFilter;
texture.wrapS = RepeatWrapping;
texture.wrapT = RepeatWrapping;
texture.repeat.set(1 / tilesAmountHorizontally, 1 / tilesAmountVertically);
You can find out the maximum texture size the device supports like this:
var gl = document.getElementById( "my-canvas" ).getContext( "experimental-webgl" );
alert( gl.getParameter( gl.MAX_TEXTURE_SIZE ) );
Your console message shows the limit on your device is 16384, so the 65536-pixel width exceeds it and the renderer scales the texture down (keeping the aspect ratio). If the texture is larger than the limit, it can be divided and applied in parts.
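If you are already inside three.js, the same limit can also be queried from the renderer (assuming renderer is your THREE.WebGLRenderer):

console.log( renderer.capabilities.maxTextureSize ); // commonly 16384 on desktop GPUs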
I'm trying to learn how to take advantage of the GPU for three.js and WebGL work, so I'm analysing code to pick up patterns and methods for how things are done, and I need some explanation of this code.
I found this example: One million particles, which seems to be the simplest one involving calculations made in shaders and spat back out.
So from what I have figured out:
- Data for the velocity and position of particles are kept in textures that are passed to the shaders, where the calculations are performed, and then read back for the update.
Particles are created randomly on the plane, no more than the texture size?
for (var i = 0; i < 1000000; i++) {
    particles.vertices.push(new THREE.Vector3((i % texSize) / texSize,
        Math.floor(i / texSize) / texSize, 0));
}
I don't see any particle position updates. How is the data from the shaders retrieved and used to update each particle?
Does pick() only pass the mouse position, to calculate the direction of the particles' movement?
Why are there 2 buffers and 8 shaders (4 pairs of fragment and vertex shaders)? Isn't a single pair for calculating velocity and position enough?
How does the shader update the texture? I only see reading from it, not writing to it.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how these results get computed nearly solely on the GPU via WebGL/Three.js - it might look a bit sloppy as I'm using the integrated graphics of an Intel i7 4770k:
Introduction:
The simple idea to keep everything intra-GPU: each particle's state is represented by one texture pixel color value. One million particles result in 1024x1024 pixel textures, one holding the current positions and another holding the velocities of those particles.
Nothing forbids abusing the RGB color values of a texture for completely different data in the 0...255 range. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to keep in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
They basically used multiple shaders in sequential order. From the source code, one can identify these steps of their processing pipeline:
Randomize particles (ignored in this answer) ('randShader')
Determine each particle's velocity by its distance to the mouse location ('velShader')
Based on velocity, move each particle accordingly ('posShader')
Display the result on the screen ('dispShader')
.
Step 2: Determining Velocity per particle:
They issue a draw call on one million points whose output is saved as a texture. In the vertex shader, each point gets a varying called "vUv", which determines the x and y pixel position inside the textures used in the process.
The next step is its fragment shader, as only this shader can output data (as RGB values into the framebuffer, which gets converted to a texture buffer afterwards - all happening inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable called uniform vec3 targetPos;. Such uniforms are set cheaply from the CPU each frame, because they are shared among all instances and don't involve large memory transfers. (It contains the mouse coordinate, probably in the -1.00f to +1.00f range - they probably also update the mouse coords only once every few frames to lower CPU usage.)
What's going on here? Well, that shader calculates the distance of each particle to the mouse coordinate and, depending on that, alters that particle's velocity - the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying/overshooting the mouse position, depending on the gray value. A sketch of such a pass follows below.
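This is only an illustration of the idea, not the demo's actual shader - the uniform names and constants are made up, and values are kept in the 0..1 range that a byte texture can hold:

// velocity pass (sketch): read old velocity + position, steer toward the mouse, write new velocity
var velFragSketch = [
    'uniform sampler2D posTex;',
    'uniform sampler2D velTex;',
    'uniform vec3 targetPos;    // mouse position in -1..+1 space',
    'varying vec2 vUv;',
    'void main() {',
    '    vec3 pos = texture2D( posTex, vUv ).rgb * 2.0 - 1.0; // unpack to -1..+1',
    '    vec3 vel = texture2D( velTex, vUv ).rgb - 0.5;       // 0.5 ("gray") = at rest',
    '    vec3 toMouse = targetPos - pos;',
    '    float dist = max( length( toMouse ), 0.01 );',
    '    vel += 0.001 * toMouse / dist;   // pull toward the mouse',
    '    vel *= 0.98;                     // damping, so momentum decays slowly',
    '    gl_FragColor = vec4( vel + 0.5, 1.0 ); // repack into 0..1 for the texture',
    '}'
].join( '\n' );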
.
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again being output as a texture - this time into the positionTexture. Until the whole frame has been rendered (into the default framebuffer) and then marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In the id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values). A correspondingly simplified sketch follows.
.
Step 4: Prime time (=output)
To output the results, they probably took again a million points/vertices and gave them the positionTexture as an input. Then the vertex shader sets the position of each point by reading the texture's RGB value at location x,y (passed as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which lets about 20 particles stack up before a pixel is fully lit).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
.
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. Just noticed this answer actually became a super detailed one - skim the bold keywords to get the short version.
Is there a way to create a Three.js 3D line series with width and thickness?
Even though the Three.js line object supports linewidth, this attribute is not yet supported in all browsers on all platforms in WebGL.
Here's where you set linewidth in Three.js:
var material = new THREE.LineBasicMaterial({
color: 0xff0000,
linewidth: 5
});
The Three.js ribbon object - which had width - has recently been dropped.
The Three.js tube object generates 3D extrusions but - being Bezier-based - the lines do not pass through the control points.
Can anybody think of a method of drawing a line series (polylines, plotlines) in Three.js that has some sort of user definable 'bulk' such as width, thickness or radius?
This question may be a restating of this question:
Extruding a graph in three.js.
Given that I do not think that there is a readily available method, I would be happy to participate in an effort to create a simple function that responds to this question.
But a response that points to an existing workable method would be cool...
As WestLangley suggests, one possible solution includes the polyline being of constant pixel width - as is currently available with the Three.js canvas renderer.
A comparison of the two renderers is shown here:
Canvas and WebGL Lines Compared via GitHub Pages
Canvas and WebGL Lines Compared via jsFiddle
A solution where you could specify linewidth and similar results occurred on both renderers would be very cool.
There are, however, other ways of thinking of 3D lines where lines have actual physical constructs. They cast shadows, they respond to events. These also need to be looked into.
Here are links to GitHub Pages with two demos of lines made up of multiple meshes:
Sphere and Cylinder Polylines
An 'expensive' solution. Each joint is made up of a full sphere.
Cubes Polylines
My guess is that building either of these as smooth single meshes will be a complex problem to solve. So in the meantime here is a link to a partial visualization of 3D lines that are wide and have height:
3D Box Line on jsFiddle
The goal is to have code 'with a low level of complexity - in other words - for dummies'. Thus a 3D line should be as easy and as familiar as adding a sphere or cube. Geometry + material = mesh > scene. And the geometry should be quite economical in terms of creating vertices and faces.
The lines should have width and height. Up is always in the Y direction. The demo shows this. What the demo does not show is corners being mitred nicely...
I cooked up a possible solution which I believe meets most of your requirements:
http://codepen.io/garciahurtado/pen/AGEsf?editors=001
The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.
The shader is inspired by the blur shaders in the ThreeJS distro, which essentially copy the image a bunch of times along the horizontal and vertical axis. I automated that process and made the number of copies a user defined parameter, while ensuring that the copies were offset by 1 pixel.
I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a poly line.
The real meat and potatoes of this thing is in the custom shader (fragment shader portion):
uniform sampler2D tDiffuse;
uniform int edgeWidth;
uniform int diagOffset;
uniform float totalWidth;
uniform float totalHeight;
const int MAX_LINE_WIDTH = 30; // Needed due to weird limitations in GLSL around for loops
varying vec2 vUv;
void main() {
int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
vec4 color = vec4( 0.0, 0.0, 0.0, 0.0);
// Horizontal copies of the wireframe first
for (int i = 0; i < MAX_LINE_WIDTH; i++) {
float uvFactor = (float(1) / totalWidth);
float newUvX = vUv.x + float(i - offset) * uvFactor;
float newUvY = vUv.y + (float(i - offset) * float(diagOffset) ) * uvFactor; // only modifies vUv.y if diagOffset > 0
color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));
// GLSL does not allow loop comparisons against dynamic variables. Workaround below
if(i == edgeWidth) break;
}
// Now we create the vertical copies
for (int i = 0; i < MAX_LINE_WIDTH; i++) {
float uvFactor = (float(1) / totalHeight);
float newUvX = vUv.x + (float(i - offset) * float(-diagOffset) ) * uvFactor; // only modifies vUv.x if diagOffset > 0
float newUvY = vUv.y + float(i - offset) * uvFactor;
color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));
if(i == edgeWidth) break;
}
gl_FragColor = color;
}
Pros:
No need for additional geometry beyond the line vertices
Line thickness is user definable
A full screen shader should be relatively gentle on the GPU
Can be implemented fully within the WebGL canvas
Cons:
Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (+8 pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.
As a potential solution, you could take your 3D points, then use the THREE.Vector3.project method to figure out screen-space coordinates. Then simply use a canvas and its lineTo and moveTo operations. The canvas 2D context does support variable line thickness.
var w = renderer.domElement.clientWidth;
var h = renderer.domElement.clientHeight;
// project() maps the 3D point to normalized device coordinates (-1..+1)
vector.project(camera);
context2d.lineWidth = 3;
var x = (vector.x + 1) * (w / 2);
var y = h - (vector.y + 1) * (h / 2);
context2d.lineTo(x, y);
// (use beginPath()/moveTo() for the first point and stroke() once the path is complete)
Also, I don't think you can use the same canvas for that, so it would have to be a layer (another canvas) above your GL rendering context canvas.
If you have infrequent camera changes, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. For an orthographic camera this would work best, as only rotations would require vertex position manipulation.
Lastly, you could disable canvas clearing and draw your lines several times with offsets inside a circle or a box. After that you can re-enable clearing. This would require several extra draw operations, but it's probably the most scalable approach.
The reason lines don't work as you'd expect out of the box is how ANGLE works; it's used in Chrome and in Firefox to my knowledge, and it emulates OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support for line thickness up to 1, so they do not see it as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSes, though, where ANGLE is not used.
I am using the Three.js library to display a point cloud in a web browser. The point cloud is generated once at start-up and no further points are added or removed, but it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point clouds without using the image? The sphere geometry for example.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient approach for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function ( context )
{
context.beginPath();
context.arc( 0, 0, 1, 0, PI2, true );
context.closePath();
context.fill();
};
var particle = new THREE.Particle( new THREE.ParticleCanvasMaterial( {
color: Math.random() * 0x808008 + 0x808080,
program: program
} ) );
Feel free to adapt the code for the WebGL renderer.
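One way to adapt it (a sketch, not from the original answer): draw the circle once into a small off-screen canvas and use that as a point sprite texture. Depending on your three.js version the classes are PointsMaterial/Points or ParticleBasicMaterial/ParticleSystem; geometry is assumed to hold your point positions.

var spriteCanvas = document.createElement( 'canvas' );
spriteCanvas.width = spriteCanvas.height = 64;
var ctx = spriteCanvas.getContext( '2d' );
ctx.fillStyle = '#ffffff';
ctx.beginPath();
ctx.arc( 32, 32, 30, 0, Math.PI * 2, true );
ctx.closePath();
ctx.fill();

var sprite = new THREE.Texture( spriteCanvas );
sprite.needsUpdate = true;

var material = new THREE.PointsMaterial( { size: 5, map: sprite, transparent: true } );
scene.add( new THREE.Points( geometry, material ) );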
Another clever solution I've seen in the examples is using an encoded webm video to store the data and passing that to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3
I saw the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Adding this to the fragment shader fixed the problem for me.
It wasn't z-fighting, because some corners would randomly overlap distant particles.
material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency overlap problem of the underlying square structure by turning off depth testing on the material:
depthTest: false
The problem then is that, if you add additional objects to the scene, the depth testing will fail and the point cloud will be rendered in front of the other objects, ignoring the actual order. To get around that you can additionally turn off depth writes:
depthWrite: false
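A minimal material sketch combining the two flags (the size and texture values are placeholders, not from the original question):

var material = new THREE.PointsMaterial( {
    size: 2,
    map: circleTexture,   // your circular sprite texture
    transparent: true,
    depthTest: false,     // removes the overlapping-square artifacts
    depthWrite: false     // needed once other objects share the scene
} );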
Is there a way to get the raw pixel data from a WebGL render buffer or frame buffer that is off screen?
I'm using WebGL to do some image processing, e.g. blurring an image, adjusting color, etc.
I'm using frame buffers to render to textures at the full image size, then using that texture to display in the viewport at a smaller size. Can I get the pixel data of a buffer or texture so I can work with it in a normal canvas 2d context? Or am I stuck with changing the viewport to the full image size and grabbing the data with canvas.toDataURL()?
Thanks.
This is a very old question, but I was looking for the same thing in three.js recently. There is no direct way to read from the frame buffer, but it can be done via the render-to-texture (RTT) process. I checked the framework source code and figured out the following code:
renderer.render( rttScene, rttCamera, rttTexture, true );
// ...
var width = rttTexture.width;
var height = rttTexture.height;
var pixels = new Uint8Array(4 * width * height); // be careful - allocate memory only once
// ...
var gl = renderer.context;
var framebuffer = rttTexture.__webglFramebuffer;
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, width, height);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
readPixels() should do what you want. Read more in the WebGL spec at http://www.khronos.org/registry/webgl/specs/latest/
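Note that newer three.js releases expose this read-back through the public API, so the internal __webglFramebuffer property is no longer needed; a sketch, assuming rtt is a THREE.WebGLRenderTarget you have already rendered into:

var pixels = new Uint8Array( 4 * rtt.width * rtt.height );
renderer.readRenderTargetPixels( rtt, 0, 0, rtt.width, rtt.height, pixels );
// pixels now holds RGBA bytes, e.g. for building an ImageData on a 2D canvas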
Yes, you can read raw pixel data. Set preserveDrawingBuffer to true when getting the WebGL context, and afterwards use WebGL's readPixels:
var context = canvasElement.getContext("webgl", { preserveDrawingBuffer: true });
var pixels = new Uint8Array(4 * width * height);
context.readPixels(x, y, width, height, context.RGBA, context.UNSIGNED_BYTE, pixels);