How to convert NURBS control points into 3D triangles? - javascript

THREE.js provides high-level 3D graphics; however, it does not support 3D curves, only a 2D SplineCurve.
Luckily there is a NURBS manipulator available at npmjs.com, supporting any dimension and degree. I think what I need from nurbs is the method NURBS(controlPoints).evaluate().
However, I don't understand what arguments this evaluate() method expects. I want to convert the NURBS control points into something THREE.js understands, such as triangles.
import NURBS from 'nurbs';

let curve = NURBS({
  points: [[-1, 0, 0], [-0.5, 0.5, 0.5], [0.5, -0.5, -0.5], [1, 0, 0]],
  degree: 3,
});

let r = [0, 0, 0];
curve.evaluate(r, 0, 0, 0);
console.log(r); // [ 4.083333333333332, -19.249999999999996, -19.249999999999996 ]
curve.evaluate(r, .1, .1, .1);
console.log(r); // [ 3.5756666666666677, -17.527, -17.527 ]
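
For reference, a minimal sketch of one way to get this into THREE.js, assuming (unverified here) that for a curve the nurbs package expects evaluate(out, t) with a single parameter value and exposes the valid parameter range as curve.domain: sample the curve at many parameter values and build a polyline; triangles would only come into play if you turn the curve into a surface (e.g. a tube swept along it).

import NURBS from 'nurbs';
import * as THREE from 'three';

const curve = NURBS({
  points: [[-1, 0, 0], [-0.5, 0.5, 0.5], [0.5, -0.5, -0.5], [1, 0, 0]],
  degree: 3,
});

// Assumed API: curve.domain is [[t0, t1]] and evaluate(out, t) takes
// one parameter value for a curve.
const [t0, t1] = curve.domain[0];
const points = [];
for (let i = 0; i <= 100; i++) {
  const p = curve.evaluate([0, 0, 0], t0 + (t1 - t0) * i / 100);
  points.push(new THREE.Vector3(p[0], p[1], p[2]));
}

// THREE.js understands the samples as a polyline; for actual triangles,
// sweep a tube along the same sampled points instead.
const geometry = new THREE.BufferGeometry().setFromPoints(points);
const line = new THREE.Line(geometry, new THREE.LineBasicMaterial());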

Related

Can't create a 3D Plane in XZ Plane

I use the A-Frame library. When creating a plane, I used these points:
point_1 = [0, 0, 0];
point_2 = [1, 0, 0];
point_3 = [1, 1, 0];
point_4 = [0, 1, 0];
to make a plane in the XY plane. It works, code: https://jsfiddle.net/qtv10291/yp4mx6re/8/
But when I change the points to:
point_1 = [0, 0, 0];
point_2 = [1, 0, 0];
point_3 = [1, 0, 1];
point_4 = [0, 0, 1];
to create a plane in the XZ plane, it doesn't work and throws the error THREE.DirectGeometry: Faceless geometries are not supported., code: https://jsfiddle.net/qtv10291/49gvwL8a/
THREE.Shape is only 2D and only accepts (x, y) points; the third coordinate of the points you pass in is ignored.
If you want to create an XZ plane, either rotate the XY plane by 90 degrees:
const geometry = new THREE.ShapeGeometry( polygon );
geometry.rotateX(-Math.PI * 0.5);
Or create it via vertices and faces:
const geometry = new THREE.Geometry();
geometry.vertices = points;
geometry.faces.push( new THREE.Face3( 2, 1, 0 ), new THREE.Face3( 2, 0, 3 ) );
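
Note that THREE.Geometry and THREE.Face3 were removed from three.js in r125, so on newer releases the vertices-and-faces approach needs BufferGeometry instead. A minimal sketch of the same XZ quad (the point order mirrors the Face3 winding above):

// The four corners from the question, laid flat in XZ.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([
  0, 0, 0,  // point_1
  1, 0, 0,  // point_2
  1, 0, 1,  // point_3
  0, 0, 1,  // point_4
], 3));
geometry.setIndex([2, 1, 0, 2, 0, 3]);  // same winding as the Face3 version
geometry.computeVertexNormals();
const mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ side: THREE.DoubleSide }));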

Why is it that gl.drawElements needs a rebind while gl.drawArrays doesn't?

Hi guys, I've been studying WebGL these days.
There are two snippets that accomplish the same thing: drawing a square. One uses gl.drawArrays with 6 vertices and the other uses gl.drawElements with 4 vertices.
However, I noticed that when using gl.drawArrays, we can unbind gl.ARRAY_BUFFER before drawing and it doesn't matter. See the snippets.
function initBuffers() {
  /*
    V0                    V3
    (-0.5, 0.5, 0)        (0.5, 0.5, 0)
    X---------------------X
    |                     |
    |                     |
    |        (0, 0)       |
    |                     |
    |                     |
    X---------------------X
    V1                    V2
    (-0.5, -0.5, 0)       (0.5, -0.5, 0)
  */
  const vertices = [
    // first triangle (V0, V1, V2)
    -0.5, 0.5, 0,
    -0.5, -0.5, 0,
    0.5, -0.5, 0,
    // second triangle (V0, V2, V3)
    -0.5, 0.5, 0,
    0.5, -0.5, 0,
    0.5, 0.5, 0
  ];

  // Setting up the VBO
  squareVertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, squareVertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
  gl.vertexAttribPointer(program.aVertexPosition, 3, gl.FLOAT, false, 0, 0);
  gl.enableVertexAttribArray(program.aVertexPosition);

  // Clean
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
}

function draw() {
  // Clear the scene
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
  gl.drawArrays(gl.TRIANGLES, 0, 6);

  // Clean
  gl.bindBuffer(gl.ARRAY_BUFFER, null);
}
initBuffers() is called before draw(). Notice that I already unbind gl.ARRAY_BUFFER before calling gl.drawArrays, and it still successfully draws the square.
However, when using gl.drawElements, I have to make sure gl.ELEMENT_ARRAY_BUFFER is currently bound to the correct indices, e.g.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
If I call gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, null); like I did for gl.drawArrays, I have to rebind it with gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, squareIndexBuffer); before calling gl.drawElements.
This is mostly explained in this answer: https://stackoverflow.com/a/27164577/128511
The short version is that gl.drawArrays uses only attributes. Attributes have buffers bound to them when you call gl.vertexAttribPointer: whatever buffer was bound to gl.ARRAY_BUFFER at the time you call gl.vertexAttribPointer is copied into the state of that attribute.
Attributes themselves are state of the current Vertex Array Object (VAO) as is the current ELEMENT_ARRAY_BUFFER. VAOs are an optional extension in WebGL1 and a standard part of WebGL2.
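To make the distinction concrete, here is a minimal sketch (buffer and attribute names are hypothetical) of which bindings matter at which moment:

// The attribute records positionBuffer at gl.vertexAttribPointer time...
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionLoc);
gl.bindBuffer(gl.ARRAY_BUFFER, null);  // fine: the attribute keeps its buffer reference

// ...but ELEMENT_ARRAY_BUFFER is read at draw time (it is VAO state),
// so it must still be bound when gl.drawElements runs.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);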
Again refer to this answer: https://stackoverflow.com/a/27164577/128511 and also this answer: https://stackoverflow.com/a/50257695/128511

PIL and javascript getImageData

I'm trying to modify pixels with Python (PIL) and read the pixels back with JavaScript.
In Python I did this:
from PIL import Image

im = Image.open("foo.png").convert('RGBA')
pix = im.load()
print(im.mode)
for x in range(0, 10):
    for y in range(0, 10):
        pix[x, y] = (50, x, y, 100)
im.save("foo_new.png")
In JavaScript I read the image back like this:
<img src="foo_new.png" id="myImage">
<br>
<canvas id="myCanvas" width="240" height="297" style="border:1px solid #d3d3d3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
<script>
window.onload = function() {
  var img = document.getElementById("myImage");
  var c = document.getElementById("myCanvas");
  var ctx = c.getContext("2d");
  ctx.drawImage(img, 0, 0);
  console.log(ctx.getImageData(0, 0, 10, 10).data);
};
</script>
In Chrome's console I expect to see the following sequence:
[50, 0, 0, 100]
[50, 0, 1, 100]
[50, 0, 2, 100]
...
However, I'm getting this:
[51, 0, 0, 100],
[51, 0, 0, 100],
[51, 3, 0, 100],
[51, 3, 0, 100]
Totally unexpected ...
But RGB mode works just fine. I don't really get it, does anyone know?
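
One likely explanation (hedged, since the exact rounding is implementation-dependent): a 2D canvas stores pixels with premultiplied alpha, so any image with alpha below 255 gets rounded on the way into the canvas and again on the way out of getImageData. A quick sketch of that arithmetic for the red channel:

// With alpha = 100/255, red = 50 is premultiplied when drawn and
// un-premultiplied by getImageData, rounding twice along the way.
const alpha = 100 / 255;
const stored = Math.round(50 * alpha);       // 20
const readBack = Math.round(stored / alpha); // 51, the observed value

This would also explain why RGB mode works fine: with alpha at 255 the premultiply step changes nothing.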

webgl, texture coordinates and obj

I'm finding it difficult to understand the correlation between vertex and texture coordinates when the data is rendered. I have a cube drawn with drawElements from data parsed out of an OBJ file. I got textures somewhere close to working with a simple plane, where the number of vertices for positions and for texture coordinates matched, but once I use a more complex model, or even just a more complex UV unwrap, I end up with the texture going all wrong.
From what I've read there doesn't seem to be a way of using texture-coordinate indices the same way you would for vertex positions, which is unfortunate because the OBJ file has that information. The way I've gotten it close to working was by building an array of texture coordinates from the index data in the OBJ. But because the lengths of the vertex and texture-coordinate arrays differ (for example, in an OBJ for a cube there are 8 vertices and up to 36 texture coordinates, depending on how the mesh is unwrapped), they don't correlate.
What is the correct workflow for using drawElements and mapping each vertex to its correct texture coordinates?
You are correct: you cannot easily use different indices for different attributes (in your case positions and texture coordinates).
A common example is a cube. If you want to render a cube with lighting you need normals. There are only 8 corner positions on a cube, but each corner needs 3 different normals for the same position, one for each face that shares that corner. That means you need 24 vertices total, 4 for each of the 6 faces of the cube.
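To make that concrete, here is a fragment (not a full cube) showing one corner appearing three times, once per face that touches it, each time paired with a different normal:

// The same corner position is repeated once for each of the three faces
// that share it, paired with that face's normal (21 more vertices follow
// in a real 24-vertex cube).
const positions = [
  -1, -1, -1,  // corner as part of the left   (-X) face
  -1, -1, -1,  // corner as part of the bottom (-Y) face
  -1, -1, -1,  // corner as part of the back   (-Z) face
  // ...
];
const normals = [
  -1,  0,  0,
   0, -1,  0,
   0,  0, -1,
  // ...
];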
If you have a file format that has separate indices for different attributes, you'll need to expand them out so that each unique combination of attributes (position, normal, texture coord, etc.) is in your buffers.
Most game engines would do this kind of thing offline. In other words, they'd write some tool that reads the OBJ file, expands the various attributes, and then writes the data back out pre-expanded. That's because generating the expanded data can be time-consuming at runtime for a large model if you're trying to optimize the data and only keep unique vertices.
If you don't care about optimal data then just expand based on the indices, as in the sketch below. The number of indices for each type of attribute should be the same.
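A minimal sketch of that expansion (the data layout is hypothetical: flat position/texcoord arrays plus one index list per attribute, as parsed from the OBJ):

// Walk the parallel index lists and copy each referenced value out,
// producing equal-length, draw-ready arrays.
function expand(positions, texcoords, posIndices, uvIndices) {
  const outPositions = [];
  const outTexcoords = [];
  for (let i = 0; i < posIndices.length; ++i) {
    const p = posIndices[i] * 3;
    const t = uvIndices[i] * 2;
    outPositions.push(positions[p], positions[p + 1], positions[p + 2]);
    outTexcoords.push(texcoords[t], texcoords[t + 1]);
  }
  return { positions: outPositions, texcoords: outTexcoords };
}

After expanding you can either drop the index buffer and use drawArrays, or rebuild a single shared index list (0, 1, 2, ...) for drawElements.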
Note: positions are not special. I bring this up because you said "there doesn't seem to be a way of using texture coordinate indices the same way you would for vertex position". WebGL has no concept of "positions". It just has attributes which describe how to pull data out of buffers. What's in those attributes (positions, normals, random data, whatever) is up to you. gl.drawElements indexes the entire combination of attributes you supply. If you pass in an index of 7 it's going to give you element 7 of each attribute.
Note that the above describes how pretty much all 3D engines written in WebGL work. That said, you can get creative if you really want to.
Here's a program that stores positions and normals in textures. It then puts the indices in buffers. Because textures are random access, it can therefore have different indices for positions and normals:
var canvas = document.getElementById("c");
var gl = canvas.getContext("webgl");
var ext = gl.getExtension("OES_texture_float");
if (!ext) {
  alert("need OES_texture_float extension cause I'm lazy");
  //return;
}
if (gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS) < 2) {
  alert("need to be able to access textures from vertex shaders");
  //return;
}
var m4 = twgl.m4;
var v3 = twgl.v3;
var programInfo = twgl.createProgramInfo(gl, ["vshader", "fshader"]);

// Cube data
var positions = [
  -1, -1, -1,  // 0 lbb
  +1, -1, -1,  // 1 rbb        2---3
  -1, +1, -1,  // 2 ltb       /|  /|
  +1, +1, -1,  // 3 rtb      6---7 |
  -1, -1, +1,  // 4 lbf      | | | |
  +1, -1, +1,  // 5 rbf      | 0-|-1
  -1, +1, +1,  // 6 ltf      |/  |/
  +1, +1, +1,  // 7 rtf      4---5
];
var positionIndices = [
  3, 7, 5,  3, 5, 1,  // right
  6, 2, 0,  6, 0, 4,  // left
  6, 7, 3,  6, 3, 2,  // top
  0, 1, 5,  0, 5, 4,  // bottom
  7, 6, 4,  7, 4, 5,  // front
  2, 3, 1,  2, 1, 0,  // back
];
var normals = [
  +1,  0,  0,
  -1,  0,  0,
   0, +1,  0,
   0, -1,  0,
   0,  0, +1,
   0,  0, -1,
];
var normalIndices = [
  0, 0, 0, 0, 0, 0,  // right
  1, 1, 1, 1, 1, 1,  // left
  2, 2, 2, 2, 2, 2,  // top
  3, 3, 3, 3, 3, 3,  // bottom
  4, 4, 4, 4, 4, 4,  // front
  5, 5, 5, 5, 5, 5,  // back
];

function degToRad(deg) {
  return deg * Math.PI / 180;
}

var bufferInfo = twgl.createBufferInfoFromArrays(gl, {
  a_positionIndex: { size: 1, data: positionIndices },
  a_normalIndex: { size: 1, data: normalIndices },
});
var textures = twgl.createTextures(gl, {
  positions: {
    format: gl.RGB,
    type: gl.FLOAT,
    height: 1,
    src: positions,
    min: gl.NEAREST,
    mag: gl.NEAREST,
    wrap: gl.CLAMP_TO_EDGE,
  },
  normals: {
    format: gl.RGB,
    type: gl.FLOAT,
    height: 1,
    src: normals,
    min: gl.NEAREST,
    mag: gl.NEAREST,
    wrap: gl.CLAMP_TO_EDGE,
  },
});

var xRot = degToRad(30);
var yRot = degToRad(20);
var lightDir = v3.normalize([-0.2, -0.1, 0.5]);

function draw(time) {
  time *= 0.001;  // convert to seconds
  twgl.resizeCanvasToDisplaySize(gl.canvas);
  gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

  yRot = time;

  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.CULL_FACE);
  gl.useProgram(programInfo.program);

  var persp = m4.perspective(
      degToRad(45),
      gl.canvas.clientWidth / gl.canvas.clientHeight,
      0.1, 100.0);
  var mat = m4.identity();
  mat = m4.translate(mat, [0.0, 0.0, -5.0]);
  mat = m4.rotateX(mat, xRot);
  mat = m4.rotateY(mat, yRot);

  var uniforms = {
    u_positions: textures.positions,
    u_positionsSize: [positions.length / 3, 1],
    u_normals: textures.normals,
    u_normalsSize: [normals.length / 3, 1],
    u_mvpMatrix: m4.multiply(persp, mat),
    u_mvMatrix: mat,
    u_color: [0.5, 0.8, 1, 1],
    u_lightDirection: lightDir,
  };

  twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
  twgl.setUniforms(programInfo, uniforms);
  twgl.drawBufferInfo(gl, bufferInfo);

  requestAnimationFrame(draw);
}
requestAnimationFrame(draw);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="//twgljs.org/dist/2.x/twgl-full.min.js"></script>
<script id="vshader" type="whatever">
attribute float a_positionIndex;
attribute float a_normalIndex;
attribute vec4 a_pos;

uniform sampler2D u_positions;
uniform vec2 u_positionsSize;
uniform sampler2D u_normals;
uniform vec2 u_normalsSize;
uniform mat4 u_mvpMatrix;
uniform mat4 u_mvMatrix;

varying vec3 v_normal;

// to index the value in the texture we need to
// compute a texture coordinate that will access
// the correct texel. To do that we need access from
// the middle of the first texel to the middle of the
// last texel.
//
// In other words if we had 3 values (and therefore
// 3 texels) we'd have something like this
//
//      --------- 3x1 texels ---------
//     [        ][        ][        ]
// 0.0 |<---------------------------->| 1.0
//
// If we just did index / numValues we'd get
//
//     [        ][        ][        ]
//     |          |          |
//     0.0        0.333      0.666
//
// Which is right between texels so we add
// a halfTexel to get this
//
//     [        ][        ][        ]
//          |          |          |
//        0.167       0.5       0.833
//
// note: In WebGL2 we could just use texelFetch
// which takes integer pixel locations
vec2 texCoordFromIndex(const float index, const vec2 textureSize) {
  vec2 colRow = vec2(
      mod(index, textureSize.x),      // column
      floor(index / textureSize.x));  // row
  return vec2((colRow + 0.5) / textureSize);
}

void main() {
  vec2 ptc = texCoordFromIndex(a_positionIndex, u_positionsSize);
  vec3 position = texture2D(u_positions, ptc).rgb;

  vec2 ntc = texCoordFromIndex(a_normalIndex, u_normalsSize);
  vec3 normal = texture2D(u_normals, ntc).rgb;

  gl_Position = u_mvpMatrix * vec4(position, 1);
  v_normal = (u_mvMatrix * vec4(normal, 0)).xyz;
}
</script>
<script id="fshader" type="whatever">
precision mediump float;

uniform vec4 u_color;
uniform vec3 u_lightDirection;

varying vec3 v_normal;

void main() {
  float light = dot(
      normalize(v_normal), u_lightDirection) * 0.5 + 0.5;
  gl_FragColor = vec4(u_color.rgb * light, u_color.a);
}
</script>
<canvas id="c"></canvas>

How to make a decahedron

Hi, I've found some code that animates 3D shapes and even gives an example of making and animating an icosahedron. I'm trying to turn it into a decahedron, though, and my geometry is pretty bad. The code I have for the icosahedron is:
// draw an icosahedron
// (assumes `sideLen` and the graphics context `g` are defined by the
// surrounding code)
var tau = 1.6180,
    phi = 20.90515745,  // (180 - 138.1896851) / 2
    rt3 = Math.sqrt(3),
    d = sideLen / 2,
    foldTbl = [ 60, -60,  60, -60,
               -60, -60,  60,  60,
                60, -60,  60, -60,
               -60, -60,  60,  60,
                60, -60,  60, -60],
    moveTbl = [   0, 2*d,   0, 2*d,
                2*d, 2*d,   0,   0,
                  0, 2*d,   0, 2*d,
                2*d, 2*d,   0,   0,
                  0, 2*d,   0, 2*d],
    triangle = ['M', 0, 0, 0, 'L', d*rt3, d, 0, 0, 2*d, 0, 'z'],
    tri,
    faces = g.createGroup3D(),
    bend = -2 * phi,
    i;

for (i = 0; i < 20; i++)
{
  // create the next face
  tri = g.compileShape3D(triangle, "red", null, 1);  // backColor irrelevant
  faces.addObj(tri);
  faces.translate(0, -moveTbl[i], 0);
  faces.rotate(0, 0, 1, foldTbl[i]);
  faces.rotate(0, 1, 0, bend);
  faces.translate(0, moveTbl[i], 0);
}
return faces;
I'm sure there must be an easy way to make this a decahedron, but if anyone has any advice that'd be amazing - thanks!
If you have coordinates for an icosahedron but want to draw a dodecahedron, you can make use of the duality between those two. Take the icosahedron, and put a new vertex in the middle of every one of its triangular faces. Connect two new vertices with an edge if the corresponding faces of the icosahedron had an edge in common. You will obtain a dodecahedron, with one vertex for every face of the icosahedron, and one face for every vertex.
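
For what it's worth, a minimal sketch of that construction, independent of the drawing library above (data layout hypothetical: vertices as [x, y, z] triples, faces as triples of vertex indices): the dual's vertices are the centroids of the icosahedron's faces.

// Each icosahedron face becomes one dodecahedron vertex: the centroid
// of the face's three corners (scale afterwards for a given size).
function dualVertices(vertices, faces) {
  return faces.map(([a, b, c]) => [
    (vertices[a][0] + vertices[b][0] + vertices[c][0]) / 3,
    (vertices[a][1] + vertices[b][1] + vertices[c][1]) / 3,
    (vertices[a][2] + vertices[b][2] + vertices[c][2]) / 3,
  ]);
}

Grouping the new vertices by the original icosahedron vertex they surround then gives the twelve pentagonal faces.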
