A similar question has been asked before - but the answers involve OpenCV with Python or C++. My use case requires this to happen in a browser environment.
I am trying to identify the 4 corners of a sheet of paper in a photo, in order to then straighten the sides into a rectangle.
Currently my approach is:
blur and threshold in a shader with WebGL (faked with ImageMagick for now, but a known problem)
calculate x and y differentials also in a shader with WebGL
identify the convex hull of points with a differential above a threshold using hull.js
????
straighten the image with glfx.js
The ???? step is where I get stuck: going from a convex hull to a quadrangular hull, which would then give me the corners to straighten.
The shader for calculating the differential is here:
void main() {
  vec2 cellSize = 1.0 / resolution;
  vec2 position = (gl_FragCoord.xy / resolution.xy);
  vec4 color = texture2D(image, position);
  vec2 step = 1.0 / resolution.xy;
  vec4 rightCol = texture2D(image, position + vec2(step.x, 0.0));
  vec4 bottomCol = texture2D(image, position + vec2(0.0, step.y));

  // convert the three samples to luminance
  float y = 0.299 * color.r + 0.587 * color.g + 0.114 * color.b;
  color = vec4(y, y, y, 1.0);
  y = 0.299 * rightCol.r + 0.587 * rightCol.g + 0.114 * rightCol.b;
  rightCol = vec4(y, y, y, 1.0);
  y = 0.299 * bottomCol.r + 0.587 * bottomCol.g + 0.114 * bottomCol.b;
  bottomCol = vec4(y, y, y, 1.0);

  float thrs = y < 0.5 ? 1.0 : 0.0; // currently unused
  float maxColor = length(color.rgb);

  // manual forward differences (currently unused in favour of dFdx/dFdy)
  float r = abs(length(-rightCol + color) / step.x);
  float g = abs(length(-bottomCol + color) / step.y);
  // gl_FragColor.r = r;
  // gl_FragColor.g = g;

  // output the absolute screen-space derivatives of the luminance
  gl_FragColor.r = abs(dFdx(maxColor));
  gl_FragColor.g = abs(dFdy(maxColor));
  gl_FragColor.b = 0.0;
  gl_FragColor.a = 1.0;
}
Here is an example of the images I am trying to process, along with the steps.
source (blurred for privacy)
blur and threshold
differential
convex hull
Now I am thinking of a brute-force, combinatorial approach, trying out all groups of 4 points until I find the largest rectangle.
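As a rough sketch of that idea (assuming hull is the ordered array of [x, y] points returned by hull.js; quadArea and largestQuad are just placeholder names):

// Shoelace area of a quadrilateral whose corners are given in hull order.
function quadArea(a, b, c, d) {
  const pts = [a, b, c, d];
  let area = 0;
  for (let i = 0; i < 4; i++) {
    const [x1, y1] = pts[i];
    const [x2, y2] = pts[(i + 1) % 4];
    area += x1 * y2 - x2 * y1;
  }
  return Math.abs(area) / 2;
}

// Try every combination of 4 hull points and keep the largest quadrilateral.
// O(n^4), so only workable while the hull stays small.
function largestQuad(hull) {
  let best = null;
  let bestArea = -1;
  const n = hull.length;
  for (let i = 0; i < n - 3; i++)
    for (let j = i + 1; j < n - 2; j++)
      for (let k = j + 1; k < n - 1; k++)
        for (let l = k + 1; l < n; l++) {
          const area = quadArea(hull[i], hull[j], hull[k], hull[l]);
          if (area > bestArea) {
            bestArea = area;
            best = [hull[i], hull[j], hull[k], hull[l]];
          }
        }
  return best;
}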
I've also tried Harris corner detection with the FivekoGFX library, but I got too many false positives for it to be useful.
What would be a way to solve the "find the quadrangle" problem? Is there anything better than brute force? Any pointers to libraries or algorithms would be helpful.
I am animating the x, y, z coordinates of vertices in a sphere-like manner, with horizontal rings around the center, as attributes on a THREE.Points() object. It uses a MeshStandardMaterial(), and I have tilted the Points object slightly about the z axis with points.rotation.z = 0.2. Everything is as expected:
When I swap the MeshStandardMaterial() for a ShaderMaterial() and rewrite the animation part into a shader, the z tilt is gone. I have checked with an axes helper on the Points object, and the object is indeed still tilted, but the shader animation now seems to move the vertices around the x, y, z axes of the scene rather than those of the tilted Points object.
As you can see in the picture, the sphere and the outer ring of particles are no longer tilted at the same angle as the axes helper.
Is there a quick fix for this, or do I have to change the shader animation and factor in an overall rotation there?
Thanks.
Here is the shader script as requested. However, I have tested the principle on a few shader animations from various tutorials, and they all behave the same way, so I'm assuming this is an inherent problem or expected behaviour with shaders:
#define PI 3.1415926535897932384626433832795
uniform float uSize;
attribute float aScale;
attribute vec4 aParticle;
uniform float uTime;
varying vec3 vColor;
void main()
{
  /**
   * Position
   */
  vec4 modelPosition = modelMatrix * vec4(position, 1.0);

  /**
   * Particle
   */
  float moveT = aParticle.g;
  float moveS = aParticle.r + uTime * aParticle.b;
  float fullRotate = step(360.0, moveS);
  moveS = moveS - (fullRotate * 360.0);
  float radMoveS = moveS * (PI / 180.0);
  float radMoveT = moveT * (PI / 180.0);

  modelPosition.x = aParticle.a * cos(radMoveS) * sin(radMoveT); // x
  modelPosition.y = aParticle.a * cos(radMoveT);                 // y
  modelPosition.z = aParticle.a * sin(radMoveS) * sin(radMoveT); // z

  vec4 viewPosition = viewMatrix * modelPosition;
  vec4 projectedPosition = projectionMatrix * viewPosition;
  gl_Position = projectedPosition;

  /**
   * Size
   */
  gl_PointSize = uSize * aScale;

  // Attenuation
  gl_PointSize *= (1.0 / -viewPosition.z);

  /**
   * Color
   */
  vColor = color;
}
At the top of your vertex shader, you apply the modelMatrix to your position by multiplication:
vec4 modelPosition = modelMatrix * vec4(position, 1.0);
But then you overwrite any results from that matrix multiplication when you assign each xyz component a new value:
modelPosition.x = aParticle.a * cos(radMoveS) * sin(radMoveT);
This means that what you're seeing is (a) not using position, and (b) not using modelMatrix. You simply have to apply the matrix multiplication after you've assigned the local vertex positions.
vec4 newPosition = vec4(
  aParticle.a * cos(radMoveS) * sin(radMoveT), // x
  aParticle.a * cos(radMoveT),                 // y
  aParticle.a * sin(radMoveS) * sin(radMoveT), // z
  1.0                                          // w
);

vec4 modelPosition = modelMatrix * newPosition;
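From there the rest of the shader stays exactly as it was (viewMatrix * modelPosition, then the projection), so the tilt you applied with points.rotation.z is carried through by modelMatrix.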
I am trying to make a 3D box with a pattern on each side using the following code, but when viewed from certain angles, the back faces disappear when looking through the transparent parts of the front faces. I was also wondering if it is possible to have a different pattern on each face? Many thanks in advance!
let r = 10
let a = 0
let c = 20
let angle = 0
let art
function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  art = createGraphics(800, 800);
}

function draw() {
  background(0);
  let x = r + c * cos(a);
  let y = r + c * sin(a);
  art.fill(r, a, c);
  art.ellipse(x + 400, y + 400, 10, 10);
  c += 0.2;
  a += 1.8;
  push();
  texture(art);
  rotateX(angle);
  rotateY(angle);
  rotateZ(angle);
  box(400);
  angle += 0.0003;
  pop();
  orbitControl();
}
html, body { margin: 0; overflow: hidden; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.0/p5.js"></script>
This happens because in WebGL, once a pixel is drawn, regardless of that pixel's level of transparency, any triangle that would later draw to the same pixel at a greater depth is discarded (I think the alpha information from the original pixel(s) may no longer be available). For transparency to work properly in WebGL it is necessary to draw all triangles in depth order (farthest from the camera first), and even then, if two triangles intersect there will still be problems.
In your case, because you have many pixels that are completely transparent and others that are completely opaque, there is another solution: a custom fragment shader that discards pixels whose texture alpha is below some threshold.
const vert = `
uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  vec4 viewModelPosition = uModelViewMatrix * vec4(aPosition, 1.0);
  gl_Position = uProjectionMatrix * viewModelPosition;
}`;

const frag = `
precision mediump float;

// ranges from 0..1
varying vec2 vTexCoord;

uniform sampler2D uSampler;

void main() {
  vec4 tex = texture2D(uSampler, vTexCoord);
  if (tex.a < 0.05) {
    discard;
  }
  gl_FragColor = tex;
}`;
let r = 10
let a = 0
let c = 20
let angle = 0
let art
let discardShader;
function setup() {
  createCanvas(windowWidth, windowHeight, WEBGL);
  art = createGraphics(800, 800);
  discardShader = createShader(vert, frag);
  textureMode(NORMAL);
}

function draw() {
  background(0);
  let x = r + c * cos(a);
  let y = r + c * sin(a);
  art.fill(r, a, c);
  art.ellipse(x + 400, y + 400, 10, 10);
  c += 0.2;
  a += 1.8;
  push();
  noStroke();
  texture(art);
  shader(discardShader);
  rotateX(angle);
  rotateY(angle);
  rotateZ(angle);
  box(400);
  angle += 0.0003;
  pop();
  orbitControl();
}
html,
body {
margin: 0;
overflow: hidden;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.4.1/p5.js"></script>
Note #1: It is important that you use p5.js v1.4.1 for this to work, because prior to that version there was a bug that prevented user shaders from working with textures.
Note #2: If your texture had partial opacity, this would not work; instead you would want to render each plane of the box separately, in the correct order (farthest from the camera first).
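As a rough sketch of that second approach (assuming, for simplicity, that the box only rotates about Y and that art is the same p5.Graphics texture as above; the face table and drawSortedBox helper are hypothetical, not p5.js built-ins):

const HALF = 200; // half the box size used above
const faces = [
  { offset: [0, 0,  HALF], rx: 0,            ry: 0 },            // front
  { offset: [0, 0, -HALF], rx: 0,            ry: Math.PI },      // back
  { offset: [ HALF, 0, 0], rx: 0,            ry:  Math.PI / 2 }, // right
  { offset: [-HALF, 0, 0], rx: 0,            ry: -Math.PI / 2 }, // left
  { offset: [0, -HALF, 0], rx:  Math.PI / 2, ry: 0 },            // top
  { offset: [0,  HALF, 0], rx: -Math.PI / 2, ry: 0 }             // bottom
];

function drawSortedBox(angleY) {
  // Depth of each face centre after the box's Y rotation; the default camera
  // sits on +Z, so the most negative depth is farthest and is drawn first.
  const depth = f => -f.offset[0] * Math.sin(angleY) + f.offset[2] * Math.cos(angleY);
  const ordered = [...faces].sort((a, b) => depth(a) - depth(b));
  for (const f of ordered) {
    push();
    noStroke();
    rotateY(angleY);        // whole-box rotation
    translate(...f.offset); // move to the face centre
    rotateY(f.ry);          // orient the plane to face outward
    rotateX(f.rx);
    texture(art);
    plane(HALF * 2, HALF * 2);
    pop();
  }
}

In draw() you would then call drawSortedBox(angle) in place of the rotate/box(400) calls, and normal alpha blending would handle the partially transparent texels.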
I am first rotating a sprite which has a texture applied, then applying a filter with a fragment shader which causes distortion on the sprite. However, when I add the filter to the sprite, it snaps back to the normal horizontal position instead of the angled position it had before.
I have tried applying a rotation function inside the shader to rotate the UVs. This rotates the image, but affects the image outside the parts that are rotated. Here are some screenshots.
Initial look of the sprite after adding and changing the angle:
How it looks after applying the filter:
As you can see, the rotation is removed.
I tried adding a rotation matrix inside the shader; here is the result:
The rotation is correct, but only the texture is rotated, not the actual container.
Applying the angle back to the sprite does nothing.
The desired result is the first image plus the second, so that the filter is applied to the rotated sprite.
Here is the code that adds the filter to the image:
const filter = new PIXI.Filter(null, getTransitionFragmentShader(transition, 2), uniforms);
filter.apply = function (filterManager, input, output, clear) {
  var matrix = new PIXI.Matrix();
  this.uniforms.mappedMatrix = filterManager.calculateNormalizedScreenSpaceMatrix(matrix);
  PIXI.Filter.prototype.apply.call(this, filterManager, input, output, clear);
};
sprite.filters = [filter];
vec2 rotate(vec2 v, float a) {
  float s = sin(a);
  float c = cos(a);
  mat2 m = mat2(c, -s, s, c);
  return m * v;
}

vec4 transition(vec2 p) {
  float dt = parabola(progress, 1.);
  float border = 1.;
  vec2 newUV = rotate(p, angle);
  vec4 color1 = vec4(0, 0, 0, 0);

  if (fromNothing) {
    color1 = vec4(0, 0, 0, 0);
  } else {
    color1 = texture2D(uTexture1, newUV);
  }

  vec4 color2 = texture2D(uTexture2, newUV);
  vec4 d = texture2D(displacement, vec2(newUV.x * scaleX, newUV.y * scaleY));

  float realnoise = 0.5 * (cnoise(vec4(newUV.x * scaleX + 0. * time / 3., newUV.y * scaleY, 0. * time / 3., 0.)) + 1.);

  float w = width * dt;
  float maskvalue = smoothstep(1. - w, 1., p.x + mix(-w / 2., 1. - w / 2., progress));
  float maskvalue0 = smoothstep(1., 1., p.x + progress);

  float mask = maskvalue + maskvalue * realnoise;
  float final = smoothstep(border, border + 0.01, mask);

  return mix(color1, color2, final);
}
This is the shader code, with some functions omitted for brevity.
Thanks!
What I did instead was use a vertex shader for the rotation, as follows:
attribute vec2 aVertexPosition;

uniform mat3 projectionMatrix;

varying vec2 vTextureCoord;

uniform vec4 inputSize;
uniform vec4 outputFrame;
uniform vec2 rotation;

vec4 filterVertexPosition(void)
{
  vec2 position = aVertexPosition * max(outputFrame.zw, vec2(0.)) + outputFrame.xy;

  vec2 rotatedPosition = vec2(
    position.x * rotation.y + position.y * rotation.x,
    position.y * rotation.y - position.x * rotation.x
  );

  return vec4((projectionMatrix * vec3(rotatedPosition, 1.0)).xy, 0.0, 1.0);
}

vec2 filterTextureCoord(void)
{
  return aVertexPosition * (outputFrame.zw * inputSize.zw);
}

void main(void)
{
  gl_Position = filterVertexPosition();
  vTextureCoord = filterTextureCoord();
}
The rotation is passed as a pair of the angle's sine and cosine: [sin(radians), cos(radians)].
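For example (the app and sprite objects here are assumptions, not part of the filter code above), the uniform can be refreshed from the sprite's angle each frame:

// Keep the filter's rotation uniform in sync with the sprite's rotation.
app.ticker.add(() => {
  filter.uniforms.rotation = [Math.sin(sprite.rotation), Math.cos(sprite.rotation)];
});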
When you create a sphere (which is itself a polyhedron) or any other polyhedron with the native WebGL API, you get a flat-shaded polyhedron, and when you assign a texture to it, it looks ugly because of the angles between the small faces on its surface. You can subdivide the surface to get a smoother result, but is there any other method to smooth the surface of the polyhedron? It should look like the two pictures below (both captured from the Blender software).
Here is my code for generating the sphere:
function getSphere(r, segment_lat, segment_lon) {
  var normalData = []; // note: never filled in this version
  var vertexData = [];
  var textureCoord = [];
  var vertexIndex = [];

  for (var latNum = 0; latNum <= segment_lat; latNum++) {
    var theta = latNum * Math.PI / segment_lat;
    var sinTheta = Math.sin(theta);
    var cosTheta = Math.cos(theta);

    for (var lonNum = 0; lonNum <= segment_lon; lonNum++) {
      var phi = lonNum * 2 * Math.PI / segment_lon;
      var sinPhi = Math.sin(phi);
      var cosPhi = Math.cos(phi);

      var x = cosPhi * sinTheta;
      var y = cosTheta;
      var z = sinPhi * sinTheta;

      var u = 1 - (lonNum / segment_lon);
      var v = 1 - (latNum / segment_lat);

      textureCoord.push(u);
      textureCoord.push(v);

      vertexData.push(r * x);
      vertexData.push(r * y);
      vertexData.push(r * z);
    }
  }

  for (var latNum = 0; latNum < segment_lat; latNum++) {
    for (var lonNum = 0; lonNum < segment_lon; lonNum++) {
      var first = (latNum * (segment_lon + 1)) + lonNum;
      var second = first + segment_lon + 1;

      vertexIndex.push(first);
      vertexIndex.push(second);
      vertexIndex.push(first + 1);

      vertexIndex.push(second);
      vertexIndex.push(second + 1);
      vertexIndex.push(first + 1);
    }
  }

  return {
    'vertexData': vertexData,
    'vertexIndex': vertexIndex,
    'textureCoord': textureCoord,
    'normalDatas': normalData
  };
}
Fragment Shader:
precision mediump float;

varying vec2 vTextureCoord;
uniform sampler2D uSampler;

void main(void) {
  vec3 light = vec3(1, 1, 1);
  vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
  gl_FragColor = vec4(textureColor.rgb * light, textureColor.a);
  // gl_FragColor = vec4(1, 0, 0, .8);
}
Vertex Shader:
attribute vec2 aTextureCoord;
attribute vec3 aVertexPosition;

// uniform mediump mat4 proj_inv;
uniform mediump mat4 modelViewMatrix;
uniform mediump mat4 projectMatrix;

varying highp vec2 vTextureCoord;

void main(void) {
  // projectMatrix * modelViewMatrix must be applied in the vertex shader, or it will be wrong
  gl_Position = projectMatrix * modelViewMatrix * vec4(aVertexPosition, 1.0);
  vTextureCoord = aTextureCoord;
}
If I had to guess, your rendered result is different from the picture you showed: what you see is a "flat" sphere in one uniform color, and you want a shaded sphere. Is that correct?
If so, you need to read tutorials on how lighting works. Basically, the angle between the viewing vector and the fragment's normal is used to determine the brightness of each fragment. A fragment on the sphere that you are looking at directly has a very small angle between the view vector and its normal, and is thus bright. A fragment on the barely visible edge of the sphere has a large angle between normal and view, and thus appears dark.
In your sphere generation code, you need to calculate the normals as well and pass that information to the GPU along with the rest. Fortunately, for a sphere the normal is easy to calculate: normal = normalize(position - center), or just normalize(position) if the center is assumed to be at (0, 0, 0).
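A minimal sketch of that (the helper name is hypothetical, but it mirrors the loop in getSphere above): because the (x, y, z) values computed there already lie on the unit sphere, they can be pushed straight into normalData without an extra normalize.

function getSphereNormals(segment_lat, segment_lon) {
  var normalData = [];
  for (var latNum = 0; latNum <= segment_lat; latNum++) {
    var theta = latNum * Math.PI / segment_lat;
    for (var lonNum = 0; lonNum <= segment_lon; lonNum++) {
      var phi = lonNum * 2 * Math.PI / segment_lon;
      // The unit-sphere position doubles as the normal when the centre is the origin.
      var x = Math.cos(phi) * Math.sin(theta);
      var y = Math.cos(theta);
      var z = Math.sin(phi) * Math.sin(theta);
      normalData.push(x, y, z);
    }
  }
  return normalData;
}

Upload normalData as a vertex attribute next to aVertexPosition, pass it to the fragment shader as a varying, and scale the texture colour by something like max(dot(normalize(vNormal), lightDirection), 0.0) instead of the constant light vector you have now.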
I'm trying to rewrite my canvas-based rendering for my 2d game engine. I've made good progress and can render textures to the WebGL context fine, complete with scaling, rotation and blending. But my performance sucks. On my test laptop, I can get 30 fps in vanilla 2d canvas with 1,000 entities on screen at once; in WebGL, I get 30 fps with 500 entities on screen. I'd expect the situation to be reversed!
I have a sneaking suspicion that the culprit is all this Float32Array buffer garbage I'm tossing around. Here's my render code:
// boilerplate code and obj coordinates
// grab gl context
var canvas = sys.canvas;
var gl = sys.webgl;
var program = sys.glProgram;
// width and height
var scale = sys.scale;
var tileWidthScaled = Math.floor(tileWidth * scale);
var tileHeightScaled = Math.floor(tileHeight * scale);
var normalizedWidth = tileWidthScaled / this.width;
var normalizedHeight = tileHeightScaled / this.height;
var worldX = targetX * scale;
var worldY = targetY * scale;
this.bindGLBuffer(gl, this.vertexBuffer, sys.glWorldLocation);
this.bufferGLRectangle(gl, worldX, worldY, tileWidthScaled, tileHeightScaled);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, this.texture);
var frameX = (Math.floor(tile * tileWidth) % this.width) * scale;
var frameY = (Math.floor(tile * tileWidth / this.width) * tileHeight) * scale;
// fragment (texture) shader
this.bindGLBuffer(gl, this.textureBuffer, sys.glTextureLocation);
this.bufferGLRectangle(gl, frameX, frameY, normalizedWidth, normalizedHeight);
gl.drawArrays(gl.TRIANGLES, 0, 6);
bufferGLRectangle: function (gl, x, y, width, height) {
  var left = x;
  var right = left + width;
  var top = y;
  var bottom = top + height;
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
    left, top,
    right, top,
    left, bottom,
    left, bottom,
    right, top,
    right, bottom
  ]), gl.STATIC_DRAW);
},

bindGLBuffer: function (gl, buffer, location) {
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.vertexAttribPointer(location, 2, gl.FLOAT, false, 0, 0);
},
And here are my simple test shaders (these are missing blending, scaling & rotation):
// fragment (texture) shader
precision mediump float;

uniform sampler2D image;
varying vec2 texturePosition;

void main() {
  gl_FragColor = texture2D(image, texturePosition);
}

// vertex shader
attribute vec2 worldPosition;
attribute vec2 vertexPosition;
uniform vec2 canvasResolution;
varying vec2 texturePosition;

void main() {
  vec2 zeroToOne = worldPosition / canvasResolution;
  vec2 zeroToTwo = zeroToOne * 2.0;
  vec2 clipSpace = zeroToTwo - 1.0;
  gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
  texturePosition = vertexPosition;
}
Any ideas on how to get better performance? Is there a way to batch my drawArrays? Is there a way to cut down on the buffer garbage?
Thanks!
There are two big issues I can see here that will adversely affect your performance.
You're creating a lot of temporary Float32Arrays, which are currently expensive to construct (that should get better in the future). It would be far better in this case to create a single array and set the vertices each time, like so:
verts[0] = left; verts[1] = top;
verts[2] = right; verts[3] = top;
// etc...
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);
The bigger issue by far, however, is that you're only drawing a single quad at a time. 3D APIs simply aren't designed to do this efficiently. What you want to do is try to squeeze as many triangles as possible into each drawArrays/drawElements call you make.
There are several ways to do that, the most straightforward being to fill up a buffer with as many quads as you can that share the same texture, then draw them all in one go. In pseudocode:
var MAX_QUADS_PER_BATCH = 100;
var VERTS_PER_QUAD = 6;
var FLOATS_PER_VERT = 2;
var verts = new Float32Array(MAX_QUADS_PER_BATCH * VERTS_PER_QUAD * FLOATS_PER_VERT);
var quadCount = 0;

function addQuad(left, top, right, bottom) {
  var offset = quadCount * VERTS_PER_QUAD * FLOATS_PER_VERT;
  verts[offset] = left;      verts[offset + 1] = top;
  verts[offset + 2] = right; verts[offset + 3] = top;
  // etc...
  quadCount++;
  if (quadCount == MAX_QUADS_PER_BATCH) {
    flushQuads();
  }
}

function flushQuads() {
  gl.bindBuffer(gl.ARRAY_BUFFER, vertsBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW); // Copy the buffer we've been building to the GPU.
  // Make sure vertexAttribPointers are set, etc...
  gl.drawArrays(gl.TRIANGLES, 0, quadCount * VERTS_PER_QUAD);
  quadCount = 0; // Start the next batch from the beginning of the buffer.
}

// In your render loop
for (sprite in spriteTypes) {
  gl.bindTexture(gl.TEXTURE_2D, sprite.texture);
  for (instance in sprite.instances) {
    addQuad(instance.left, instance.top, instance.right, instance.bottom);
  }
  flushQuads();
}
That's an oversimplification, and there are ways to batch even more, but hopefully that gives you an idea of how to start batching your calls for better performance.
If you use WebGL Inspector you'll see in the trace whether you issue any unnecessary GL instructions (they're marked with a bright yellow background). This might give you an idea of how to optimize your rendering.
Generally speaking, sort your draw calls so that everything using the same program, then the same attributes, then the same textures, and then the same uniforms is drawn together. This way you'll issue as few GL instructions (and JS instructions) as possible.
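As a hypothetical sketch of that ordering (the sprite fields and id properties here are assumptions, not part of your code):

// Group draws so program and texture changes happen as rarely as possible.
sprites.sort((a, b) => {
  if (a.program !== b.program) return a.program.id - b.program.id;
  return a.texture.id - b.texture.id;
});

let lastProgram = null;
let lastTexture = null;
for (const s of sprites) {
  if (s.program !== lastProgram) {
    gl.useProgram(s.program.glProgram);
    lastProgram = s.program;
  }
  if (s.texture !== lastTexture) {
    flushQuads(); // finish the previous batch before switching textures
    gl.bindTexture(gl.TEXTURE_2D, s.texture.glTexture);
    lastTexture = s.texture;
  }
  addQuad(s.left, s.top, s.right, s.bottom); // batch as in the earlier answer
}
flushQuads();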