I am trying to make a 3D box with a pattern on each side using the following code, but when viewed from certain angles, the back faces disappear when looking through the transparent parts of the front faces. I was also wondering: is it possible to have a different pattern on each face? Many thanks in advance!
let r = 10
let a = 0
let c = 20
let angle = 0
let art
function setup() {
createCanvas(windowWidth, windowHeight, WEBGL);
art = createGraphics(800, 800)
}
function draw() {
background(0);
let x = r + c * cos(a)
let y = r + c * sin(a)
art.fill(r, a, c)
art.ellipse(x + 400, y + 400, 10, 10)
c += 0.2
a += 1.8
push()
texture(art)
rotateX(angle)
rotateY(angle)
rotateZ(angle)
box(400)
angle += 0.0003
pop()
orbitControl();
}
html, body { margin: 0; overflow: hidden; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.js"></script>
This happens because in WebGL, once a pixel is drawn, any triangle that would later draw to that same pixel at a greater depth is discarded, regardless of the first pixel's transparency (I think the alpha information from the original pixel(s) may no longer be available). In order for transparency to work properly in WebGL, it is necessary to draw all triangles in depth order (farthest from the camera first). And even then, if two triangles intersect there will still be problems.
In your case, because many of your pixels are completely transparent and the rest are completely opaque, there is another solution: a custom fragment shader that discards pixels whose texture alpha is below some threshold.
const vert = `
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;
attribute vec3 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main() {
vTexCoord = aTexCoord;
vec4 viewModelPosition = uModelViewMatrix * vec4(aPosition, 1.0);
gl_Position = uProjectionMatrix * viewModelPosition;
}`;
const frag = `
precision mediump float;
// ranges from 0..1
varying vec2 vTexCoord;
uniform sampler2D uSampler;
void main() {
vec4 tex = texture2D(uSampler, vTexCoord);
if (tex.a < 0.05) {
discard;
}
gl_FragColor = tex;
}`;
let r = 10
let a = 0
let c = 20
let angle = 0
let art
let discardShader;
function setup() {
createCanvas(windowWidth, windowHeight, WEBGL);
art = createGraphics(800, 800)
discardShader = createShader(vert, frag)
textureMode(NORMAL)
}
function draw() {
background(0);
let x = r + c * cos(a)
let y = r + c * sin(a)
art.fill(r, a, c)
art.ellipse(x + 400, y + 400, 10, 10)
c += 0.2
a += 1.8
push()
noStroke()
texture(art)
shader(discardShader)
rotateX(angle)
rotateY(angle)
rotateZ(angle)
box(400)
angle += 0.0003
pop()
orbitControl();
}
html,
body {
margin: 0;
overflow: hidden;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.1/p5.js"></script>
Note #1: It is important that you use p5.js v1.4.1 or later for this to work, because prior to that version a bug prevented user shaders from working with textures.
Note #2: If your texture had partial opacity, this would not work; instead you would want to render each face of the box separately, in the correct order (farthest from the camera first).
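The back-to-front ordering mentioned in Note #2 can be sketched like this (a minimal sketch, assuming you have precomputed each face's center in world space and know the current eye position; the names `faceCenters` and `camera` are illustrative, not part of the p5.js API):

```javascript
// Sort faces farthest-first so semi-transparent pixels blend correctly.
// Returns the face indices in the order they should be drawn.
function sortFacesBackToFront(faceCenters, camera) {
  return faceCenters
    .map((c, i) => {
      // Squared distance from the camera is enough for ordering.
      const dx = c.x - camera.x, dy = c.y - camera.y, dz = c.z - camera.z;
      return { index: i, dist2: dx * dx + dy * dy + dz * dz };
    })
    .sort((a, b) => b.dist2 - a.dist2) // farthest first
    .map(f => f.index);
}
```

You would then draw each face (for example with `plane()` plus the appropriate rotation/translation) in that order, every frame, since the order changes as the camera moves.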
Related
A similar question has been asked before - but the answers involve OpenCV with Python or C++. My use case requires this to happen in a browser environment.
I am trying to identify the 4 corners of a sheet of paper in a photo, in order to then straighten the sides into a rectangle.
Currently my approach is:
blur and threshold in a shader with WebGL (faked with imagemagick for now, but a known problem)
calculate x and y differentials also in a shader with WebGL
identify the convex hull of points with a differential above a threshold using hull.js
????
straighten the image with glfx.js
where ???? is where I get stuck: going from a convex hull to a quadrilateral hull, which would then give me the corners to straighten.
The shader for calculating the differential is here:
void main(){
vec2 cellSize = 1.0 / resolution;
vec2 position = ( gl_FragCoord.xy / resolution.xy );
vec4 color = texture2D(image, position);
vec2 step = 1.0 / resolution.xy;
vec4 rightCol = texture2D(image, position + vec2(step.x, 0.0));
vec4 bottomCol = texture2D(image, position + vec2(0.0, step.y));
float y = 0.299 * color.r + 0.587 * color.g + 0.114 * color.b;
color = vec4(y, y, y, 1.0);
y = 0.299 * rightCol.r + 0.587 * rightCol.g + 0.114 * rightCol.b;
rightCol = vec4(y, y, y, 1.0);
y = 0.299 * bottomCol.r + 0.587 * bottomCol.g + 0.114 * bottomCol.b;
bottomCol = vec4(y, y, y, 1.0);
float thrs = y < 0.5 ? 1.0 : 0.0;
float maxColor = length(color.rgb);
float r = abs(length(-rightCol + color) / step.x);
float g = abs(length(-bottomCol + color) / step.y);
// gl_FragColor.r = r;
// gl_FragColor.g = g;
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.b = 0.0;
gl_FragColor.a = 1.0;
}
Here is an example of the images I am trying to process and the steps.
source (blurred for privacy)
blur and threshold
differential
convex hull
Now I am thinking of a brute-force, combinatoric approach, trying out all groups of 4 points until I find the largest quadrilateral.
I've also tried Harris corner detection with the FivekoGFX library but I got too many false positives for it to be useful.
What would be a way to solve the "find the quadrangle" problem? Is there anything better than brute force? Any pointers to libraries or algorithms would be helpful.
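For what it's worth, the brute-force idea is often workable here, because a convex hull of a paper sheet rarely has more than a dozen or two vertices, so trying all 4-subsets is only O(n⁴) over a tiny n. One common heuristic is to pick the 4 hull vertices enclosing the maximum area (this is a sketch under the assumption that hull.js returns the hull points in order, so any 4 of them taken in that order form a convex quadrilateral; the max-area quad is a heuristic for the paper outline, not a guarantee):

```javascript
// Shoelace formula for an ordered quadrilateral a-b-c-d.
function quadArea(a, b, c, d) {
  const cross = (p, q) => p[0] * q[1] - p[1] * q[0];
  return Math.abs(cross(a, b) + cross(b, c) + cross(c, d) + cross(d, a)) / 2;
}

// Try every 4-subset of the ordered hull; keep the largest-area quad.
function largestQuad(hull) {
  let best = null, bestArea = -1;
  for (let i = 0; i < hull.length - 3; i++)
    for (let j = i + 1; j < hull.length - 2; j++)
      for (let k = j + 1; k < hull.length - 1; k++)
        for (let l = k + 1; l < hull.length; l++) {
          const area = quadArea(hull[i], hull[j], hull[k], hull[l]);
          if (area > bestArea) {
            bestArea = area;
            best = [hull[i], hull[j], hull[k], hull[l]];
          }
        }
  return { quad: best, area: bestArea };
}
```

If n ever gets large, there are rotating-calipers-style algorithms that find the maximum-area inscribed quadrilateral of a convex polygon in linear time, but for this use case the quadruple loop is usually fast enough.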
I am first rotating a sprite which has a texture applied, then applying a filter with a fragment shader which causes distortion on the sprite. However, when I add the filter to the sprite, it rotates to normal horizontal position instead of the angled position it had before.
I have tried to apply a rotating function inside the shader to rotate the uv. This rotates the image but changes the image outside the parts that are rotated. Here are some screenshots.
Initial look of the sprite after adding and changing the angle:
How it looks after applying the filter:
As you can see the rotation is removed.
I tried to add a rotation matrix inside the shader, here is the result:
The rotation is correct, but only the texture is rotated and not the actual container.
Applying angle back to sprite does nothing.
The actual result should be first + second image, so that the filter applies on the rotated sprite.
Here is the code that adds the filter to the image:
const filter = new PIXI.Filter(null, getTransitionFragmentShader(transition, 2), uniforms);
filter.apply = function (filterManager, input, output, clear) {
var matrix = new PIXI.Matrix();
this.uniforms.mappedMatrix = filterManager.calculateNormalizedScreenSpaceMatrix(matrix);
PIXI.Filter.prototype.apply.call(this, filterManager, input, output, clear);
};
sprite.filters = [filter];
vec2 rotate(vec2 v, float a) {
float s = sin(a);
float c = cos(a);
mat2 m = mat2(c, -s, s, c);
return m * v;
}
vec4 transition (vec2 p) {
float dt = parabola(progress,1.);
float border = 1.;
vec2 newUV = rotate(p, angle);
vec4 color1 = vec4(0, 0, 0, 0);
if (fromNothing) {
color1 = vec4(0, 0, 0, 0);
} else {
color1 = texture2D(uTexture1, newUV);
}
vec4 color2 = texture2D(uTexture2, newUV);
vec4 d = texture2D(displacement,vec2(newUV.x*scaleX,newUV.y*scaleY));
float realnoise = 0.5*(cnoise(vec4(newUV.x*scaleX + 0.*time/3., newUV.y*scaleY,0.*time/3.,0.)) +1.);
float w = width*dt;
float maskvalue = smoothstep(1. - w,1.,p.x + mix(-w/2., 1. - w/2., progress));
float maskvalue0 = smoothstep(1.,1.,p.x + progress);
float mask = maskvalue + maskvalue*realnoise;
float final = smoothstep(border,border+0.01,mask);
return mix(color1, color2, final);
}
This is the shader code, with some functions omitted for brevity.
Thanks!
What I did instead was use a vertex shader for the rotation, as follows:
attribute vec2 aVertexPosition;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
uniform vec4 inputSize;
uniform vec4 outputFrame;
uniform vec2 rotation;
vec4 filterVertexPosition( void )
{
vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;
vec2 rotatedPosition = vec2(
position.x * rotation.y + position.y * rotation.x,
position.y * rotation.y - position.x * rotation.x
);
return vec4((projectionMatrix * vec3(rotatedPosition, 1.0)).xy, 0.0, 1.0);
}
vec2 filterTextureCoord( void )
{
return aVertexPosition * (outputFrame.zw * inputSize.zw);
}
void main(void)
{
gl_Position = filterVertexPosition();
vTextureCoord = filterTextureCoord();
}
Rotation is passed as a pair of the sine and cosine of the angle: [sin(radians), cos(radians)].
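Building that uniform pair on the JavaScript side can be sketched as follows (a minimal sketch; `sprite` and `filter` are the objects from the question, and updating inside a ticker callback is an assumption about how the app is structured):

```javascript
// Pack an angle in radians into the [sin, cos] pair the shader expects.
function rotationUniform(radians) {
  return [Math.sin(radians), Math.cos(radians)];
}

// Keep the filter in sync with the sprite's rotation each frame, e.g.:
// app.ticker.add(() => {
//   filter.uniforms.rotation = rotationUniform(sprite.rotation);
// });
```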
When you create a sphere (which is, after all, also a polyhedron) or any other polyhedron with the native WebGL API, you get a flat-shaded polyhedron, and when you assign a texture to it, it looks ugly because of the visible angle between adjacent small faces on the surface. You can subdivide the surface to get a smoother result, but is there any other method to smooth the surface of the polyhedron? It should look like the two pictures below (captured from the Blender software).
Here is my code for generating the sphere
function getSphere(r,segment_lat,segment_lon){
var normalData = [];
var vertexData = [];
var textureCoord = [];
var vertexIndex = [];
for (var latNum = 0; latNum <= segment_lat; latNum++) {
var theta = latNum * Math.PI / segment_lat;
var sinTheta = Math.sin(theta);
var cosTheta = Math.cos(theta);
for (var lonNum = 0; lonNum <= segment_lon; lonNum++) {
var phi = lonNum * 2 * Math.PI / segment_lon;
var sinPhi = Math.sin(phi);
var cosPhi = Math.cos(phi);
var x = cosPhi * sinTheta;
var y = cosTheta;
var z = sinPhi * sinTheta;
var u = 1 - (lonNum / segment_lon);
var v = 1 - (latNum / segment_lat);
textureCoord.push(u);
textureCoord.push(v);
vertexData.push(r * x);
vertexData.push(r * y);
vertexData.push(r * z);
}
}
for (var latNum=0; latNum < segment_lat;latNum++) {
for (var lonNum=0; lonNum < segment_lon; lonNum++) {
var first = (latNum * (segment_lon + 1)) + lonNum;
var second = first + segment_lon + 1;
vertexIndex .push(first);
vertexIndex .push(second);
vertexIndex .push(first + 1);
vertexIndex .push(second);
vertexIndex .push(second + 1);
vertexIndex .push(first + 1);
}
}
return {'vertexData':vertexData,'vertexIndex':vertexIndex,'textureCoord':textureCoord,'normalDatas':normalData};
}
Fragment Shader:
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
vec3 light = vec3(1,1,1);
vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
gl_FragColor = vec4(textureColor.rgb*light,textureColor.a);
// gl_FragColor = vec4 (1,0,0,.8);
}
Vertex Shader:
attribute vec2 aTextureCoord;
attribute vec3 aVertexPosition;
// uniform mediump mat4 proj_inv;
uniform mediump mat4 modelViewMatrix;
uniform mediump mat4 projectMatrix;
varying highp vec2 vTextureCoord;
void main(void) {
//projectMatrix multi modelViewMatrix must be in vertex shader,or it will be wrong;
gl_Position = projectMatrix*modelViewMatrix*vec4(aVertexPosition, 1.0);
vTextureCoord = aTextureCoord;
}
If I have to guess, your rendered result is different from the picture you showed: what you see is a "flat" sphere in one uniform color, and you want a shaded sphere. Is that correct?
If so, you need to go read tutorials on how lighting works. Basically, the angle between the viewing vector and the fragment's normal determines the brightness of each fragment. A fragment on the sphere that you are staring at directly has a very small angle between the view vector and its normal, and is therefore bright. A fragment on the barely visible edge of the sphere has a large angle between its normal and the view vector, and therefore appears dark.
In your sphere-generation code, you need to calculate the normals as well and pass them to the GPU along with the rest. Fortunately, for a sphere the normal is easy to calculate: normal = normalize(position - center); or just normalize(position) if the center is assumed to be at (0, 0, 0).
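Concretely, in the lat/lon loop from the question, `(x, y, z)` is already a unit vector from the sphere's center, so you can push it straight into `normalData`. For the general case, a helper along these lines works (a sketch; `position` and `center` are 3-element arrays):

```javascript
// Normal of a sphere surface point: the normalized direction
// from the sphere's center to that point.
function sphereNormal(position, center) {
  const dx = position[0] - center[0];
  const dy = position[1] - center[1];
  const dz = position[2] - center[2];
  const len = Math.hypot(dx, dy, dz);
  return [dx / len, dy / len, dz / len];
}
```

Inside the loop this reduces to `normalData.push(x); normalData.push(y); normalData.push(z);` since the radius scaling cancels out under normalization.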
I'm attempting to animate a projectile's trajectory (in the form of a cannonball) given an angle and initial velocity. I've built the "cannon" as a line and the target I'm aiming for as a box, which I know is elementary, but I just want to get the projectile motion down for now. Currently I'm messing around with hardcoded angles and velocities, but eventually I would like to input the angle and velocity and have the cannon shoot accordingly. The target is level with the launch point, so I know that the x value of the cannonball will be (initialVelocity)·cos(angle)·time, and the y value will be (initialVelocity)·sin(angle)·time − (g·t²)/2, where g is the gravitational acceleration. Currently what I have is a cannonball moving linearly across the screen, and it doesn't even start in the right spot.
I'm not asking for code to be written for me, I'd just like a starting point as to how to get the cannon to move from the right spot, and to know where I'm going completely wrong. I'm confident I can get it to hit the target if I'm taught how to manipulate the shaders correctly.
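The parametric equations in the question can be evaluated directly in JavaScript, which is a useful sanity check before touching the shaders (a sketch; the launch point `(x0, y0)`, the gravity value `g`, and the clip-space units are all illustrative assumptions):

```javascript
// Position of the ball t seconds after launch, from the kinematics
// x = x0 + v*cos(a)*t,  y = y0 + v*sin(a)*t - g*t^2/2.
function ballPosition(x0, y0, v, angleDeg, g, t) {
  const a = angleDeg * Math.PI / 180;
  return {
    x: x0 + v * Math.cos(a) * t,
    y: y0 + v * Math.sin(a) * t - 0.5 * g * t * t
  };
}
```

Recomputing the ball's circle from this position each frame (as `initBall` already does) should produce the parabola; the linear motion in the current code comes from the missing `- 0.5*g*t*t` term in the y coordinate.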
Shaders:
<script id="vertex-shader" type="x-shader/x-vertex">
precision mediump float;
attribute vec4 vPosition;
attribute vec4 vColor;
varying vec4 fColor;
uniform float time;
void main()
{
/*old code from manipulating clock hands*/
/* fColor = vColor;
float length = sqrt(vPosition.x*vPosition.x + vPosition.y * vPosition.y);
gl_Position.x = length*cos(theta);
gl_Position.y = length*sin(theta);
gl_Position.z = 0.0;
gl_Position.w = 1.0; */
fColor = vColor;
gl_Position = vPosition;
}
</script>
<script id="background-vertex-shader" type="x-shader/x-vertex">
precision mediump float;
attribute vec4 vPosition;
attribute vec4 vColor;
varying vec4 fColor;
void main()
{
fColor = vColor;
gl_Position = vPosition;
}
</script>
<script id="fragment-shader" type="x-shader/x-fragment">
precision mediump float;
varying vec4 fColor;
void main()
{
gl_FragColor = fColor;
}
</script>
WebGL code:
var gl;
var points = [];
var colors = [];
var cannonpoints = [];
var circlepoints;
var squarepoints;
var baseColors = [
vec3(1.0,0.0,0.0),
vec3(0.0,1.0,0.0),
vec3(0.0,0.0,1.0),
vec3(1.0,1.0,1.0),
vec3(0.0,0.0,0.0)
];
var program;
var backgroundprogram;
var Time;
var thetaLoc;
var angle;
var initialVel;
var vx;
var vy;
var ballX = -0.5;
var ballY = -0.5;
window.onload = function init(){
var canvas = document.getElementById("gl-canvas");
gl = WebGLUtils.setupWebGL(canvas);
if(!gl) {
alert("webGL isn't available");
}
// configuring WebGL
gl.viewport(0,0,
canvas.width,canvas.height);
gl.clearColor(0.0,0.0,1.0,1.0); // set background color to blue.
// load the shaders and initialize
// the attrbibute buffers.
program = initShaders(gl, "vertex-shader", "fragment-shader");
backgroundprogram = initShaders(gl, "background-vertex-shader", "fragment-shader");
document.getElementById("shoot").onclick = function() {
velocity = document.getElementById("velocity").value;
angle = document.getElementById("angle").value;
console.log("angle="+angle);
vx = (Math.cos(angle*(Math.PI/180))*velocity);
console.log("vx="+vx);
vy = (Math.sin(angle*(Math.PI/180))*velocity);
console.log("vy="+vy);
}
Time = 0.0;
thetaLoc = gl.getUniformLocation(program,"time");
initBackground();
/******************
initBall(Time,1);
*******************/
initBall(Time);
//render();
setInterval(render, 100);
};
function render(){
gl.clear(gl.COLOR_BUFFER_BIT);
/* draw the circle */
gl.drawArrays(gl.TRIANGLE_FAN,0,circlepoints);
/* draw the square(s) */
gl.drawArrays(gl.TRIANGLES,circlepoints,squarepoints);
//draw the cannon
gl.drawArrays(gl.LINES,circlepoints+squarepoints,2);
//draw the cannon ball
//starting index is the amount of points already drawn
//amount of points for circle + amount of points for square + amount of points for line
var start = circlepoints + squarepoints + 2;
Time += 0.01;
initBall(Time); //,1);
gl.uniform1f(thetaLoc,Time);
//amount of points to draw is length of points array minus the start index
gl.drawArrays(gl.TRIANGLE_FAN,start,points.length-start);
}
function initBall(Time) { //,r){
gl.useProgram(program);
/*******************************************************
filled_circle(vec2(r*Math.cos(Time),r*Math.sin(Time)),0.05,4);*/
vx= (Math.cos(60*(Math.PI/180))*1);
vy= (Math.sin(60*(Math.PI/180))*1);
filled_circle(vec2(-0.8+(vx*Time),-0.3+(vy*Time)),0.05,4);
// Load the data into the GPU
var bufferId = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufferId);
gl.bufferData(gl.ARRAY_BUFFER,
flatten(points),
gl.STATIC_DRAW);
// Associate our shader variables with
// the data buffer.
var vPosition = gl.getAttribLocation(program,"vPosition");
gl.vertexAttribPointer(vPosition,2,gl.FLOAT,false,0,0);
gl.enableVertexAttribArray(vPosition);
// load color data to the gpu
var cBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER,
cBuffer);
gl.bufferData(gl.ARRAY_BUFFER,
flatten(colors),
gl.STATIC_DRAW);
var vColor = gl.getAttribLocation(
program, "vColor");
gl.vertexAttribPointer(vColor,3,
gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(vColor);
}
I think the easiest way to do it is to give your projectile a starting position, a velocity and an acceleration. Then the position of the projectile at any time t is position + velocity * t + 0.5 * acceleration * t * t. The angle of the projectile is just the angle of its current velocity.
If you want to eventually add other stuff like collisions, then it's probably a good idea to make the projectile track its current velocity and acceleration, and update the position and velocity each frame based on the time elapsed since the last frame. Like so:
Projectile.prototype.update = function(dt){
this.velocity += this.acceleration * dt;
this.position += this.velocity * dt;
this.angle = getAngle(this.velocity);
};
And on each frame, call projectile.update(dt) where dt = currentFrameTime - lastFrameTime.
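The scalar sketch above, spelled out for 2D vectors (an illustrative sketch, assuming `position`, `velocity` and `acceleration` are `{x, y}` objects; for a cannonball the acceleration would be `{x: 0, y: -g}`):

```javascript
// Euler integration: advance velocity by acceleration, then
// position by the new velocity, over the elapsed time dt.
function updateProjectile(p, dt) {
  p.velocity.x += p.acceleration.x * dt;
  p.velocity.y += p.acceleration.y * dt;
  p.position.x += p.velocity.x * dt;
  p.position.y += p.velocity.y * dt;
  p.angle = Math.atan2(p.velocity.y, p.velocity.x); // facing direction
  return p;
}
```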
I'm trying to rewrite my canvas-based rendering for my 2d game engine. I've made good progress and can render textures to the webgl context fine, complete with scaling, rotation and blending. But my performance sucks. On my test laptop, I can get 30 fps in vanilla 2d canvas with 1,000 entities on screen at once; in WebGL, I get 30 fps with 500 entities on screen. I'd expect the situation to be reverse!
I have a sneaking suspicion that the culprit is all this Float32Array buffer garbage I'm tossing around. Here's my render code:
// boilerplate code and obj coordinates
// grab gl context
var canvas = sys.canvas;
var gl = sys.webgl;
var program = sys.glProgram;
// width and height
var scale = sys.scale;
var tileWidthScaled = Math.floor(tileWidth * scale);
var tileHeightScaled = Math.floor(tileHeight * scale);
var normalizedWidth = tileWidthScaled / this.width;
var normalizedHeight = tileHeightScaled / this.height;
var worldX = targetX * scale;
var worldY = targetY * scale;
this.bindGLBuffer(gl, this.vertexBuffer, sys.glWorldLocation);
this.bufferGLRectangle(gl, worldX, worldY, tileWidthScaled, tileHeightScaled);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, this.texture);
var frameX = (Math.floor(tile * tileWidth) % this.width) * scale;
var frameY = (Math.floor(tile * tileWidth / this.width) * tileHeight) * scale;
// fragment (texture) shader
this.bindGLBuffer(gl, this.textureBuffer, sys.glTextureLocation);
this.bufferGLRectangle(gl, frameX, frameY, normalizedWidth, normalizedHeight);
gl.drawArrays(gl.TRIANGLES, 0, 6);
bufferGLRectangle: function (gl, x, y, width, height) {
var left = x;
var right = left + width;
var top = y;
var bottom = top + height;
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
left, top,
right, top,
left, bottom,
left, bottom,
right, top,
right, bottom
]), gl.STATIC_DRAW);
},
bindGLBuffer: function (gl, buffer, location) {
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.vertexAttribPointer(location, 2, gl.FLOAT, false, 0, 0);
},
And here's my simple test shaders (these are missing blending, scaling & rotation):
// fragment (texture) shader
precision mediump float;
uniform sampler2D image;
varying vec2 texturePosition;
void main() {
gl_FragColor = texture2D(image, texturePosition);
}
// vertex shader
attribute vec2 worldPosition;
attribute vec2 vertexPosition;
uniform vec2 canvasResolution;
varying vec2 texturePosition;
void main() {
vec2 zeroToOne = worldPosition / canvasResolution;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
texturePosition = vertexPosition;
}
Any ideas on how to get better performance? Is there a way to batch my drawArrays? Is there a way to cut down on the buffer garbage?
Thanks!
There are two big issues I can see here that will adversely affect your performance.
You're creating a lot of temporary Float32Arrays, which are currently expensive to construct (that should get better in the future). It would be far better in this case to create a single array once and set the vertices each time, like so:
verts[0] = left; verts[1] = top;
verts[2] = right; verts[3] = top;
// etc...
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);
The bigger issue by far, however, is that you're only drawing a single quad at a time. 3D APIs simply aren't designed to do this efficiently. What you want to do is try and squeeze as many triangles as possible into each drawArrays/drawElements call you make.
There are several ways to do that, the most straightforward being to fill up a buffer with as many quads as you can that share the same texture, then draw them all in one go. In pseudocode:
var MAX_QUADS_PER_BATCH = 100;
var VERTS_PER_QUAD = 6;
var FLOATS_PER_VERT = 2;
var verts = new Float32Array(MAX_QUADS_PER_BATCH * VERTS_PER_QUAD * FLOATS_PER_VERT);
var quadCount = 0;
function addQuad(left, top, right, bottom) {
var offset = quadCount * VERTS_PER_QUAD * FLOATS_PER_VERT;
verts[offset] = left; verts[offset+1] = top;
verts[offset+2] = right; verts[offset+3] = top;
// etc...
quadCount++;
if(quadCount == MAX_QUADS_PER_BATCH) {
flushQuads();
}
}
function flushQuads() {
gl.bindBuffer(gl.ARRAY_BUFFER, vertsBuffer);
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW); // Copy the buffer we've been building to the GPU.
// Make sure vertexAttribPointers are set, etc...
gl.drawArrays(gl.TRIANGLES, 0, quadCount * VERTS_PER_QUAD);
quadCount = 0; // Start the next batch empty.
}
// In your render loop
for(sprite in spriteTypes) {
gl.bindTexture(gl.TEXTURE_2D, sprite.texture);
for(instance in sprite.instances) {
addQuad(instance.left, instance.top, instance.right, instance.bottom);
}
flushQuads();
}
That's an oversimplification, and there's ways to batch even more, but hopefully that gives you an idea of how to start batching your calls for better performance.
If you use WebGL Inspector, you'll see in the trace whether you issue any unnecessary GL instructions (they're marked with a bright yellow background). This might give you an idea of how to optimize your rendering.
Generally speaking, sort your draw calls so that everything using the same program, then the same attributes, then the same textures, and then the same uniforms is drawn together. That way you issue as few GL instructions (and JS instructions) as possible.
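That ordering can be sketched as a simple multi-key sort (a sketch under the assumption that each pending draw call carries numeric ids for its program and texture; the field names are illustrative):

```javascript
// Order draw calls so state changes (program, then texture) are
// grouped together, minimizing gl.useProgram/gl.bindTexture calls.
function sortDrawCalls(calls) {
  return calls.slice().sort((a, b) =>
    (a.programId - b.programId) ||
    (a.textureId - b.textureId));
}
```

When iterating the sorted list, you then only switch program or rebind a texture when the id actually changes from the previous call.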