I'm using three.js to render around 2,000,000 points with PointCloud, and I would like to make each point move. For that, I have a beginning position and an end position for each point.
So I'm looking for the best way to do it, and three ideas came up:
Do a custom shader where I modify the position of each vertex following this function: f(t) = beginning*(1-t) + end*t, with t varying from 0 to 1. But I can't send such big arrays to the shader (2 million vec3s just to know the end positions). Is it possible to give each vertex its own specific end vec3? Then I would have: uniform vec3 end, instead of uniform vec3 end[2097152].
Precalculate the positions and send the result to the shader.
Precalculate the positions using web workers and then send the result to the shader.
Does anyone have an idea on how I could do my animation?
I'm really new to all the shader stuff and I don't fully understand web workers yet, so I might be wrong with what I say. Don't hesitate to make me aware of it :P
Thanks!
edit: dunno why, but stackoverflow doesn't want me to say Hi at the beginning of the message. Hi !
editV2: Executing the moving code on the main thread:
var moving = setInterval(function () {
    var time = t;
    var iTime = 1 - time;
    var positions = scene.children[1].geometry.attributes.position.array;
    for (var i = 0; i < 2097152; i++) {
        positions[i * 3]     = before[i].x * iTime + after[i].x * time;
        positions[i * 3 + 1] = before[i].y * iTime + after[i].y * time;
        positions[i * 3 + 2] = before[i].z * iTime + after[i].z * time;
    }
    scene.children[1].geometry.attributes.position.needsUpdate = true;
    t += 1 / 60;
    if (t > 1) {
        clearInterval(moving);
    }
}, 5);
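The body of that loop is plain linear interpolation; pulled out as a standalone function (the flat-array layout is the one BufferGeometry uses, and the name is made up), it can be checked without three.js:

```javascript
// f(t) = begin*(1-t) + end*t, applied element-wise to a flat
// [x0,y0,z0, x1,y1,z1, ...] position array. 'before'/'after' here are
// flat arrays rather than the {x,y,z} objects used in the snippet above.
function lerpPositions(positions, before, after, t) {
  var iTime = 1 - t;
  for (var i = 0; i < positions.length; i++) {
    positions[i] = before[i] * iTime + after[i] * t;
  }
  return positions;
}
```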
editV3:
As WestLangley explained to me, I need to use attributes, so my code looks like this:
Vertex shader:
uniform float size;
uniform float t;
attribute vec3 endPosition;

void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position * (1.0 - t) + endPosition * t, 1.0 );
    gl_PointSize = size / length( mvPosition.xyz );
    gl_Position = projectionMatrix * mvPosition;
}
And I add the attributes with the BufferGeometry:
geometry.addAttribute('position' , new THREE.BufferAttribute( positionArray, 3 ));
geometry.addAttribute('endPosition' , new THREE.BufferAttribute( endPositionArray, 3 ));
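Both arrays passed to BufferAttribute are flat Float32Arrays; a small helper (the name is mine) shows the layout they need, given points stored as {x, y, z} objects:

```javascript
// Pack an array of {x, y, z} points into the flat Float32Array layout
// that THREE.BufferAttribute(array, 3) expects.
function packPoints(points) {
  var array = new Float32Array(points.length * 3);
  for (var i = 0; i < points.length; i++) {
    array[i * 3]     = points[i].x;
    array[i * 3 + 1] = points[i].y;
    array[i * 3 + 2] = points[i].z;
  }
  return array;
}
// e.g. var endPositionArray = packPoints(after);
```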
It's moving, but it's just getting smaller and smaller. It seems that endPosition*t is equal to 0 in the vertex shader.
Is my function wrong, or did I miss something in the declaration of endPosition?
I'm quite sure endPosition must not be a plain vec3 but something buffer-related, but for the moment I haven't found it.
Thanks for your help :)
EditV4:
Ok, I found where my problem was: I needed to declare the attribute on the ShaderMaterial I was creating:
var attributes = {
    endPosition: { type: 'v3', value: null }
};
As position already existed and was functioning, I didn't pay attention to that fact.
Thanks everyone ! Have a nice day.
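For reference, a fuller material setup for this fix might look like the sketch below. It uses the legacy ShaderMaterial attributes/uniforms declarations from the three.js versions of that era (custom attributes later moved onto BufferGeometry only); the script-tag IDs are assumptions:

```javascript
// Sketch only: legacy three.js ShaderMaterial with a custom attribute
// declaration. The shader-source element IDs are made up for illustration.
var material = new THREE.ShaderMaterial({
  uniforms: {
    size: { type: 'f', value: 10.0 },
    t:    { type: 'f', value: 0.0 }
  },
  attributes: {
    endPosition: { type: 'v3', value: null }
  },
  vertexShader: document.getElementById('vertexShader').textContent,
  fragmentShader: document.getElementById('fragmentShader').textContent
});
```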
Related
I'm a beginner at shaders and WebGL and I've taken some shortcuts to develop what I currently have, so please bear with me.
Is there a way to update attribute buffer data within the GPU only? Basically, what I want to do is send three buffers t0, t1, t2 into the GPU, representing points and their positions at times 0, 1, and 2 respectively. Then I wish to compute their new position tn from the properties of t0, t1, and t2: the velocity of the points, turning angle, and so on.
My current implementation updates the positions in JavaScript and then copies the buffers into WebGL at every draw. But why? This seems terribly inefficient to me, and I don't see why I couldn't do everything in the shader and skip moving data from CPU->GPU all the time. Is this possible somehow?
This is my current vertex shader, which sets the color of a point depending on the turn direction and the angle it's turning at (tn is updated in JS at the moment, by debugging functions):
export const VsSource = `
#define M_PI 3.1415926535897932384626433832795

attribute vec4 t0_pos;
attribute vec4 t1_pos;
attribute vec4 t2_pos;
attribute vec4 r_texture;

varying vec4 color;

void main() {
    float dist = distance(t1_pos, t2_pos);
    vec4 v = normalize(t1_pos - t0_pos);
    vec4 u = normalize(t2_pos - t1_pos);
    float angle = acos(dot(u, v));
    float intensity = angle / M_PI * 25.0;

    float turnDirr = (t0_pos.y - t1_pos.y) * (t2_pos.x - t1_pos.x) + (t1_pos.x - t0_pos.x) * (t2_pos.y - t1_pos.y);
    if (turnDirr > 0.000000001) {
        color = vec4(1.0, 0.0, 0.0, intensity);
    } else if (turnDirr < -0.000000001) {
        color = vec4(0.0, 0.0, 1.0, intensity);
    } else {
        color = vec4(1.0, 1.0, 1.0, 0.03);
    }

    gl_Position = t2_pos;
    gl_PointSize = 50.0;
}
`;
What I want to do is to update the position gl_Position (tn) depending on these properties, and then somehow shuffle/copy the buffers tn->t2, t2->t1, t1->t0 to prepare for another cycle, but all within the vertex shader (not only for the efficiency, but also for some other reasons which are unrelated to the question but related to the project I'm working on).
Note: your question should probably be closed as a duplicate, since how to write output from a vertex shader is already covered, but just to add some notes relevant to your question...
In WebGL1 it is not possible to update buffers on the GPU. You can instead store your data in a texture and update a texture. Still, you can not update a texture from itself:
pos = pos + vel // won't work
But you can update another texture
newPos = pos + vel // will work
Then, the next time, pass the texture called newPos as pos and vice versa.
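Stripped of the GL calls, that ping-pong pattern is just swapping which resource you read and which you write each frame. A sketch with plain arrays standing in for the two textures:

```javascript
// Ping-pong update: never read and write the same resource in one pass.
// 'a' and 'b' stand in for the pos/newPos textures.
function makePingPong(a, b) {
  var read = a, write = b;
  return {
    step: function (update) {
      update(read, write);   // e.g. newPos = pos + vel, per texel
      var tmp = read;        // swap roles for the next frame
      read = write;
      write = tmp;
    },
    current: function () { return read; }
  };
}
```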
In WebGL2 you can use transform feedback to write the output of a vertex shader (the varyings) to a buffer. It has the same restriction that you can not write back to a buffer you are reading from.
There is an example of writing to a texture and also an example of writing to a buffer using transform feedback in this answer
Also an example of putting vertex data in a texture here
There is an example of a particle system using textures to update the positions in this Q&A
It seems that when using WebGL's uniform3fv you MUST pass an array of length exactly 3 if your shader uniform is a vec3, OR an array whose length is a multiple of 3 if your shader uniform is an array of vec3s.
Doing this:
var data = new Float32Array([ 1, 2, 3, 4 ]);
gl.uniform3fv( uniformLocation, data );
when your uniform is declared as:
uniform vec3 some_uniform;
will result in some_uniform getting the value (0,0,0).
I searched the web, SO, MDN, and various forums and docs (one of them being the WebGL specification) and I can't find a requirement for (or mention of) this limitation.
My question is: is this required by the WebGL specification (and if yes, can you please point me to it), or is it just some undocumented behaviour you are supposed to know about?
If it is required, we'll change the code to support it as a requirement; if it is an undocumented quirk, we'll change the code to support the quirk, with an option to disable/remove that support once the quirk is gone.
From the WebGL spec, section 5.14.10:
If the array passed to any of the vector forms (those ending in v) has an invalid length, an INVALID_VALUE error will be generated. The length is invalid if it is too short for or is not an integer multiple of the assigned type.
Did you check your JavaScript console? When I tried it I clearly saw the INVALID_VALUE error in the console.
The code below
const gl = document.querySelector('canvas').getContext('webgl');

const vs = `
attribute vec4 position;
void main() {
    gl_Position = position;
}
`;

const fs = `
precision mediump float;
uniform vec3 color;
void main() {
    gl_FragColor = vec4(color, 1);
}
`;

const program = twgl.createProgram(gl, [vs, fs]);
const loc = gl.getUniformLocation(program, 'color');
gl.useProgram(program);
gl.uniform3fv(loc, [1, 2, 3, 4]);  // generates INVALID_VALUE
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
prints an INVALID_VALUE warning in the JavaScript console.
In other words, your uniform was not set to (0,0,0). Instead, your gl.uniform3fv call failed to execute since it got an error, so the uniform was left at whatever value it already had. Uniforms default to 0, so that's where the 0s came from.
If you want to catch this kind of error during debugging, consider using this helper library. Generally, though, I find just looking at the JavaScript console is enough to figure out where the issue is.
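The length rule quoted from the spec boils down to a small predicate (the function name is mine):

```javascript
// An array length is valid for the uniform[1234]fv calls if it is at least
// one vector long and an integer multiple of the vector size.
function isValidUniformLength(length, components) {
  return length >= components && length % components === 0;
}
// isValidUniformLength(4, 3) -> false, which is why passing [1, 2, 3, 4]
// to a vec3 uniform triggers INVALID_VALUE.
```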
I want to be able to append multiple light sources to each node of my scene graph, but I have no clue how to do it!
From the tutorials on learningwebgl.com I learned to either use directional or position lighting, but I couldn't find a good explanation of how to implement multiple light sources.
So, the objective is to have the option to attach an arbitrary number of light sources to each node, where each light can be either directional or positional. If possible and advisable, this should be realized using only one shader program (unless several programs are the only way to do it anyway), because I automatically create the program for each node depending on its specific needs (unless there is already a program on the stack with equal settings).
Based on the tutorials on learningwebgl.com, my fragment-shader source for a node object using lighting, without a preset binding to one of the lighting types, could look like this...
precision highp float;

uniform bool uUsePositionLighting;
uniform bool uUseDirectionalLighting;
uniform vec3 uLightPosition;
uniform vec3 uLightDirection;
uniform vec3 uAmbientColor;
uniform vec3 uDirectionalColor;
uniform float uAlpha;

varying vec4 vPosition;
varying vec3 vTransformedNormal;
varying vec3 vColor;

void main (void) {
    float directionalLightWeighting;
    if (uUseDirectionalLighting) {
        directionalLightWeighting = max(dot(vTransformedNormal, uLightDirection), 0.0);
    } else if (uUsePositionLighting) {
        vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
        directionalLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
    }
    vec3 lightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
    gl_FragColor = vec4(vColor * lightWeighting, uAlpha);
}
...so, that's basically my poor state of knowledge concerning this subject.
I also ask myself how adding more light sources would affect the lighting colors:
I mean, do uAmbientColor and uDirectionalColor have to sum to 1.0? In that case (and particularly when using more than one light source) it surely would be good to precalculate these values before passing them to the shader, wouldn't it?
Put your lights into an array and loop over them for each fragment. Start with a fixed-size array of light sources; unbounded arrays are not supported until OpenGL 4.3 and are more complicated to work with.
Something along the lines of:
uniform vec3 uLightPosition[16];
uniform vec3 uLightColor[16];
uniform vec3 uLightDirection[16];
uniform bool uLightIsDirectional[16];

....

void main(void) {
    vec3 reflectedLightColor = vec3(0.0);

    // Accumulate incoming light from all light sources
    for (int i = 0; i < 16; i++) {
        if (uLightIsDirectional[i]) {
            reflectedLightColor += max(dot(vTransformedNormal, uLightDirection[i]), 0.0) * uLightColor[i];
        } else {
            vec3 lightDirection = normalize(uLightPosition[i] - vPosition.xyz);
            reflectedLightColor += max(dot(normalize(vTransformedNormal), lightDirection), 0.0) * uLightColor[i];
        }
    }

    gl_FragColor = vec4(uAmbientColor + reflectedLightColor * vColor, uAlpha);
}
Then you can enable/disable the light sources by setting uLightColor to (0,0,0) for the entries you don't use.
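The accumulation in that loop can be sanity-checked on the CPU with a small JavaScript port (vector helpers and names are mine; light directions are assumed normalized):

```javascript
// CPU reference for the shader's diffuse accumulation over a light array.
// Vectors are plain [x, y, z] arrays.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

function reflectedLight(normal, lights) {
  var out = [0, 0, 0];
  for (var i = 0; i < lights.length; i++) {
    // Lambert term, clamped at 0 like max(dot(...), 0.0) in GLSL
    var w = Math.max(dot(normal, lights[i].direction), 0);
    out[0] += w * lights[i].color[0];
    out[1] += w * lights[i].color[1];
    out[2] += w * lights[i].color[2];
  }
  return out;
}
```

A light disabled by setting its color to (0,0,0) contributes nothing, exactly as described above.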
Ambient and directional don't have to sum to 1; in fact, a light source can have a strength much greater than 1.0, but then you will need to do tonemapping to get back to a range of values that can be displayed on a screen. I would suggest playing around to get a feel for what is happening (e.g. what happens when a light source has negative colors, or colors above 1.0?).
uAmbientColor is just a (poor) way to simulate light that has bounced several times in the scene. Otherwise, things in shadow become completely black, which looks unrealistic.
Reflectance should typically be between 0 and 1 (in this example, the parts returned by the max computations), otherwise a light source will get stronger when reflected by the material.
@ErikMan's answer is great, but may involve a lot of extra work on the part of the GPU, since you're checking every light per fragment, which isn't strictly necessary.
Rather than an array, I'd suggest building a clip-space quadtree. (You can do this in a compute shader if it's supported by your target platform / GL version.)
A node might have a structure such as (pseudocode as my JS is rusty):
typedef struct
{
    uint LightMask; /// bitmask - each light has a bit indicating whether it is active for this node. uint allows for 32 lights.
    bool IsLeaf;
} Node;

const uint maxLevels = 4;
const uint maxLeafCount = pow(4, maxLevels);
const uint maxNodeCount = (4 * maxLeafCount - 1) / 3;

/// linear quadtree - node offset = 4 * parentIndex + childIndex
Node tree[maxNodeCount];
When building the tree, just check each light's clip-space bounding box against the implicit node bounds. (Root goes from (-1,-1) to (+1,+1). Each child is half that size on each dimension. So, you don't really need to store node bounds.)
If the light touches the node, set a bit in Node.LightMask corresponding to the light. If the light completely contains the node, stop recursing. If it intersects the node, subdivide and continue.
In your fragment shader, find which leaf node contains your fragment, and apply all lights whose bit is set in the leaf node's mask.
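The leaf lookup follows directly from the linear layout (node offset = 4 * parentIndex + childIndex). A JavaScript sketch, with a quadrant numbering convention chosen arbitrarily for illustration:

```javascript
// Descend the implicit linear quadtree to the leaf containing a clip-space
// point. childIndex runs 1..4; the quadrant numbering (x-minor, y-major)
// is an arbitrary choice for this sketch.
function leafIndexForPoint(x, y, levels) {
  var index = 0;                  // root covers (-1,-1)..(+1,+1)
  var minX = -1, minY = -1, size = 2;
  for (var level = 0; level < levels; level++) {
    size /= 2;
    var cx = x >= minX + size ? 1 : 0;
    var cy = y >= minY + size ? 1 : 0;
    index = 4 * index + (2 * cy + cx) + 1;
    minX += cx * size;
    minY += cy * size;
  }
  return index;
}
// A fragment then applies light i when (tree[index].LightMask >> i) & 1 is set.
```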
You could also store your tree in a mipmap pyramid if you expect it to be dense.
Keep your tiles to a size that is a multiple of 32, preferably square.
vec2 minNodeSize = vec2(2.f / 32);
Now, if you have a small number of lights, this may be overkill. You would probably have to have a lot of lights to see any real performance benefit. Also, a normal loop may help reduce data divergence in your shader, and makes it easier to eliminate branching.
This is one way to implement a simple tiled renderer, and opens the door to having hundreds of lights.
I've created ~2500 meshes and use an algorithm to define the color of each mesh. The algorithm goes through all the meshes and accumulates a value depending on the distance to each "red-start" point. The value then decides what the color should be.
This is the result:
It is laggy and the corners aren't smooth. I want to recreate the same color result in some other way but can't figure out how. How can you do it with THREE.Shape and a fragment shader?
Final goal description:
Use one mesh (THREE.Shape), for an increase in FPS, that defines the area to be colored.
Be able to insert X amount of points that act as positions where the RED color is at its maximum; the further away from a point you get, the more the color should go from RED -> GREEN.
You should be able to move the points.
Parts of the mesh that are in between 2 or more points should get a color depending on the distance to each point.
EDIT:
Here is my jsfiddle of how far I've gotten.
http://jsfiddle.net/zDh4y/9/
EDIT SOLUTION:
http://jsfiddle.net/zDh4y/13/
I have solved it ^^
Much smoother, faster and easier!
The main issue with my algorithm was that the distance was in 'millimeters' when it should have been in 'meters':
dist = dist / (T * T * T);
Check it out here:
http://jsfiddle.net/zDh4y/13/
Edit: now it's pretty and runs on WebGL: http://glsl.heroku.com/e#16831.0 (thanks to gman)
It blends point color to the base color of the quad based on the distance between the point and the current fragment.
uniform vec2 pointA;
uniform vec2 pointB;
uniform vec2 pointC;
uniform vec4 pointColor;
uniform vec4 baseColor;

varying vec2 texCoord;

float blendF(float val) {
    return pow(val, 1.2) * 5.0;
}

vec4 addPoint(vec4 base, vec2 pointTexCord, vec4 pointColor) {
    return mix(pointColor, base, blendF(distance(pointTexCord, texCoord)));
}

void main(void)
{
    vec4 accumulator = addPoint(baseColor, pointA, pointColor);
    accumulator = addPoint(accumulator, pointB, pointColor);
    accumulator = addPoint(accumulator, pointC, pointColor);
    gl_FragColor = accumulator;
}
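The falloff is easy to sanity-check on the CPU; here is a direct JavaScript port of blendF and addPoint (mix follows the GLSL definition a*(1-t) + b*t):

```javascript
// CPU port of the shader's blending, with colors as [r, g, b, a] arrays.
function mix(a, b, t) {
  return a.map(function (v, i) { return v * (1 - t) + b[i] * t; });
}
function blendF(val) { return Math.pow(val, 1.2) * 5.0; }

// At distance 0 the result is pure pointColor; around distance ~0.26,
// blendF reaches 1 and the point's influence has faded into the base color.
function addPoint(base, dist, pointColor) {
  return mix(pointColor, base, blendF(dist));
}
```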
It would work with any kind of geometry and is about as fast as it gets.
Here's a RenderMonkey file with this shader. You can tinker with it and not worry about the OpenGL/WebGL stuff - only the shaders.
I would like to cut an object (a box) in WebGL (using fragment/vertex shaders) without Boolean operations (union, difference, etc.).
I want to use shaders to hide part of the object (so it is therefore not really a "real cut", since it simply hides the object).
EDIT
First, make sure that the vertex shader passes through to the fragment shader the position in world space (or rather, whichever coordinate space you wish the clipping to be fixed relative to). Example (written from memory, not tested):
varying vec3 positionForClip;
...

void main(void) {
    ...
    vec4 worldPos = modelMatrix * vertexPosition;
    positionForClip = worldPos.xyz / worldPos.w; // don't need homogeneous coordinates, so do the divide early
    gl_Position = viewMatrix * worldPos;
}
And in your fragment shader, you can then discard based on an arbitrary plane, or any other kind of test you want:
varying vec3 positionForClip;
uniform vec3 planeNormal;
uniform float planeDistance;
...

void main(void) {
    if (dot(positionForClip, planeNormal) > planeDistance) {
        // or if (positionForClip.x > 10.0), or whatever
        discard;
    }
    ...
    gl_FragColor = ...;
}
Note that using discard may cause a performance reduction as the GPU cannot optimize based on knowing that all fragments will be written.
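The discard condition itself is just a point-plane test, which can be checked outside the shader (helper names are mine):

```javascript
// The fragment shader's clip test as a plain predicate: a fragment is
// discarded when its world-space position lies beyond the plane.
function dot3(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }

function isClipped(positionForClip, planeNormal, planeDistance) {
  return dot3(positionForClip, planeNormal) > planeDistance;
}
```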
Disclaimer: I haven't researched this myself, and only just wrote down a possible way to do it based on the 'obvious solution'. There may be better ways I haven't heard of.
Regarding your question about multiple objects: There are many different ways to handle this — it's all custom code in the end. But you certainly can use a different shader for different objects in your scene, as long as they're in different vertex arrays.
gl.useProgram(programWhichCuts);
gl.drawArrays();
gl.useProgram(programWhichDoesNotCut);
gl.drawArrays();
If you're new to using multiple programs, it's pretty much just like using one program, except that you do the setup (compile, attach, link) once per program. The main thing to watch out for is that each program has its own uniforms, so you have to initialize the uniforms for each program separately.