I was wondering: since RGBA color is not supported in three.js (the alpha component is not used), is there a way to make a face with an opacity gradient?
I saw it's probably possible with a ShaderMaterial, using custom attributes, but as I'm new to WebGL, I don't really understand it yet.
attributes = {
// ...
customColor: { type: 'v4', value: [] }
// ...
};
var values_color = attributes.customColor.value;
for( var v = 0; v < vertices.length; v++ ) {
// ...
values_color[ v ] = new THREE.Vector4();
// ...
}
I would like to do something like this, but with transparency: http://jsfiddle.net/FtML5/3/
You can use THREE.ShaderMaterial with a custom vertex attribute for the alpha value. Here is a step-by-step guide -
1) In your vertex shader, declare an attribute float which will take the alpha value. Also declare a varying float in both the vertex and fragment shaders.
Vertex shader:
attribute float alphaValue;
varying float vAlphaValue;
Fragment shader:
varying float vAlphaValue;
2) Assign the alpha attribute value to the varying value in the vertex shader.
Vertex shader:
vAlphaValue = alphaValue;
3) After all other calculations are done, assign the varying alpha value to the alpha component of gl_FragColor.
Fragment shader:
gl_FragColor.a = vAlphaValue;
4) On the host side, build an array with one alpha value per vertex. Here is a code sample -
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(vertices, 3));
var alphaArray = [];
var alphaArrayLength = vertices.length / 3;
for(var i = 0; i < alphaArrayLength; i++) {
alphaArray.push(0.5);
}
5) Add a custom attribute for alpha value in the geometry and update it with the created array -
geometry.addAttribute('alphaValue', new THREE.BufferAttribute(new Float32Array(alphaArray), 1));
6) Create a THREE.ShaderMaterial -
var material = new THREE.ShaderMaterial({
vertexColors: THREE.VertexColors,
side: THREE.DoubleSide,
transparent: true,
vertexShader: document.getElementById('vertex_shader_for_face').text,
fragmentShader: document.getElementById('fragment_shader_for_face').text
});
7) Create the mesh with the geometry and material -
var mesh = new THREE.Mesh(geometry, material);
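Putting steps 1-3 together, the two shader scripts referenced by the material above might look like this (a minimal sketch; the flat white base color is an assumption, and position, projectionMatrix and modelViewMatrix are injected automatically by THREE.ShaderMaterial):
<script id="vertex_shader_for_face" type="x-shader/x-vertex">
attribute float alphaValue;
varying float vAlphaValue;
void main() {
    // pass the per-vertex alpha through to the fragment shader
    vAlphaValue = alphaValue;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script id="fragment_shader_for_face" type="x-shader/x-fragment">
varying float vAlphaValue;
void main() {
    // the interpolated alpha produces the opacity gradient across the face
    gl_FragColor = vec4(1.0, 1.0, 1.0, vAlphaValue);
}
</script>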
The quickest solution seems to be using a custom shader and setting the fragment opacity based on UV values.
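For example, a sketch of that approach (the horizontal fade direction is an arbitrary choice; uv is the attribute three.js provides when the geometry has UVs):
// vertex shader: pass the built-in uv attribute through
varying vec2 vUv;
void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
// fragment shader: alpha follows the U coordinate, giving a horizontal fade
varying vec2 vUv;
void main() {
    gl_FragColor = vec4(1.0, 1.0, 1.0, vUv.x);
}
Remember to set transparent: true on the material, as in the ShaderMaterial above.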
I would like to create a texture in code consisting of an array of RGBA color values and use those values to determine the colors of tiles that I'm generating in a fragment shader. I got the idea, and much of the code to do this, from the top solution provided to this SO question: Index expression must be constant - WebGL/GLSL error
However, if I create the texture using the height and width that correspond to my color array, I don't see anything render to the canvas. If I hardcode different values, I sometimes get an image, but that image doesn't place the tile colors in the desired positions, of course, and they move around as I change my viewPos variables.
From trial and error testing with a handful of handpicked values, it seems that I MIGHT only be getting an image when gl.texImage2D() receives a height and a width equal to a power of 2, though I don't see anything about this in documentation. 32 was the largest width I could produce an image with, and 16 was the largest height I could produce an image with. 1, 2, 4, and 8 also work. (the texture size should be 27 by 20 for the window size I'm testing with)
Note that the fragment shader still receives the uTileColorSampSize vector that relates to the size of the color array. I only need the gl.texImage2D() width and height values to be hardcoded to produce an image. In fact, every value I've tried for the uniform has produced an image, though each with different tile color patterns.
I've included a slightly simplified version of my Gfx class (the original is kinda messy, and includes a lot of stuff not relevant to this issue) below. I'd imagine the problem is above line 186 or so, but I've included a few additional functions below that in case those happen to be relevant.
class Gfx {
constructor() {
this.canvas = document.getElementById("canvas");
this.gl = this.canvas.getContext("webgl");
//viewPos changes as you drag your cursor across the canvas
this.x_viewPos = 0;
this.y_viewPos = 0;
}
init() {
this.resizeCanvas(window.innerWidth, window.innerHeight);
const vsSource = `
attribute vec4 aVertPos;
uniform mat4 uMVMat;
uniform mat4 uProjMat;
void main() {
gl_Position = uProjMat * uMVMat * aVertPos;
}
`;
//my tiles get drawn in the frag shader below
const fsSource = `
precision mediump float;
uniform vec2 uViewPos;
uniform vec2 uTileColorSampSize;
uniform sampler2D uTileColorSamp;
void main() {
//tile width and height are both 33px including a 1px border
const float lineThickness = (1.0/33.0);
//gridMult components will either be 0.0 or 1.0. This is used to place the grid lines
vec2 gridMult = vec2(
ceil(max(0.0, fract((gl_FragCoord.x-uViewPos.x)/33.0) - lineThickness)),
ceil(max(0.0, fract((gl_FragCoord.y-uViewPos.y)/33.0) - lineThickness))
);
//tileIndex is used to pull color data from the sampler texture
//add 0.5 due to pixel coords being off in gl
vec2 tileIndex = vec2(
floor((gl_FragCoord.x-uViewPos.x)/33.0) + 0.5,
floor((gl_FragCoord.y-uViewPos.y)/33.0) + 0.5
);
//divide by samp size as tex coords are 0.0 to 1.0
vec4 tileColor = texture2D(uTileColorSamp, vec2(
tileIndex.x/uTileColorSampSize.x,
tileIndex.y/uTileColorSampSize.y
));
gl_FragColor = vec4(
tileColor.x * gridMult.x * gridMult.y,
tileColor.y * gridMult.x * gridMult.y,
tileColor.z * gridMult.x * gridMult.y,
1.0 //the 4th rgba in our sampler is always 1.0 anyway
);
}
`;
const shader = this.buildShader(vsSource, fsSource);
this.programInfo = {
program: shader,
attribLocs: {
vertexPosition: this.gl.getAttribLocation(shader, 'aVertPos')
},
uniformLocs: {
projMat: this.gl.getUniformLocation(shader, 'uProjMat'),
MVMat: this.gl.getUniformLocation(shader, 'uMVMat'),
viewPos: this.gl.getUniformLocation(shader, 'uViewPos'),
tileColorSamp: this.gl.getUniformLocation(shader, 'uTileColorSamp'),
tileColorSampSize: this.gl.getUniformLocation(shader, 'uTileColorSampSize')
}
};
const buffers = this.initBuffers();
//check and enable OES_texture_float to allow us to create our sampler tex
if (!this.gl.getExtension("OES_texture_float")) {
alert("Sorry, your browser/GPU/driver doesn't support floating point textures");
}
this.gl.clearColor(0.0, 0.0, 0.15, 1.0);
this.gl.clearDepth(1.0);
this.gl.enable(this.gl.DEPTH_TEST);
this.gl.depthFunc(this.gl.LEQUAL);
const FOV = 45 * Math.PI / 180; // in radians
const aspect = this.gl.canvas.width / this.gl.canvas.height;
this.projMat = glMatrix.mat4.create();
glMatrix.mat4.perspective(this.projMat, FOV, aspect, 0.1, 100.0); // near plane must be > 0
this.MVMat = glMatrix.mat4.create();
glMatrix.mat4.translate(this.MVMat, this.MVMat, [-0.0, -0.0, -1.0]);
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, buffers.position);
this.gl.vertexAttribPointer(this.programInfo.attribLocs.vertexPosition, 2, this.gl.FLOAT, false, 0, 0);
this.gl.enableVertexAttribArray(this.programInfo.attribLocs.vertexPosition);
this.glDraw();
}
//glDraw() gets called once above, as well as in every frame of my render loop
//(not included here as I have it in a separate Timing class)
glDraw() {
this.gl.clear(this.gl.COLOR_BUFFER_BIT | this.gl.DEPTH_BUFFER_BIT);
this.gl.useProgram(this.programInfo.program);
//X and Y TILE_COUNTs verified to correspond to colorArray size in testing
//(colorArray.length = rgbaLength * X_TILE_COUNT * Y_TILE_COUNT)
//(colorArray.length = rgbaLength * widthInTiles * heightInTiles)
//(colorArray.length = 4 * 27 * 20)
let x_tileColorSampSize = X_TILE_COUNT;
let y_tileColorSampSize = Y_TILE_COUNT;
//getTileColorArray() produces a flat array of floats between 0.0 and 1.0
//equal in length to rgbaLength * X_TILE_COUNT * Y_TILE_COUNT
//every 4th value is 1.0, representing tile alpha
let colorArray = this.getTileColorArray();
let colorTex = this.colorMapTexFromArray(
x_tileColorSampSize,
y_tileColorSampSize,
colorArray
);
//SO solution said to use anything between 0 and 15 for texUnit, they used 3
//I imagine this is just an arbitrary location in memory to hold a texture
let texUnit = 3;
this.gl.activeTexture(this.gl.TEXTURE0 + texUnit);
this.gl.bindTexture(this.gl.TEXTURE_2D, colorTex);
this.gl.uniform1i(
this.programInfo.uniformLocs.tileColorSamp,
texUnit
);
this.gl.uniform2fv(
this.programInfo.uniformLocs.tileColorSampSize,
[x_tileColorSampSize, y_tileColorSampSize]
);
this.gl.uniform2fv(
this.programInfo.uniformLocs.viewPos,
[-this.x_viewPos, this.y_viewPos] //these change as you drag your cursor across the canvas
);
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocs.projMat,
false,
this.projMat
);
this.gl.uniformMatrix4fv(
this.programInfo.uniformLocs.MVMat,
false,
this.MVMat
);
this.gl.drawArrays(this.gl.TRIANGLE_STRIP, 0, 4);
}
colorMapTexFromArray(width, height, colorArray) {
let float32Arr = Float32Array.from(colorArray);
let oldActive = this.gl.getParameter(this.gl.ACTIVE_TEXTURE);
//SO solution said "working register 31, thanks" next to the following line
//not sure what that means but I think they're just looking for any
//arbitrary place to store the texture?
this.gl.activeTexture(this.gl.TEXTURE15);
var texture = this.gl.createTexture();
this.gl.bindTexture(this.gl.TEXTURE_2D, texture);
this.gl.texImage2D(
this.gl.TEXTURE_2D, 0, this.gl.RGBA,
//if I replace width and height with certain magic numbers
//like 4 or 8 (all the way up to 32 for width and 16 for height)
//I will see colored tiles, though obviously they don't map correctly.
//I THINK I've only seen it work with widths and heights that are
//a power of 2... could the issue be that I need my texture to have
//width and height equal to a power of 2?
width, height, 0,
this.gl.RGBA, this.gl.FLOAT, float32Arr
);
//use gl.NEAREST to prevent gl from blurring texture
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);
this.gl.bindTexture(this.gl.TEXTURE_2D, null);
this.gl.activeTexture(oldActive);
return texture;
}
//I don't think the issue would be in the functions below, but I included them anyway
resizeCanvas(baseWidth, baseHeight) {
let widthMod = 0;
let heightMod = 0;
//...some math is done here to account for some DOM elements that consume window space...
this.canvas.width = baseWidth + widthMod;
this.canvas.height = baseHeight + heightMod;
this.gl.viewport(0, 0, this.gl.canvas.width, this.gl.canvas.height);
}
initBuffers() {
const posBuff = this.gl.createBuffer();
this.gl.bindBuffer(this.gl.ARRAY_BUFFER, posBuff);
const positions = [
-1.0, 1.0,
1.0, 1.0,
-1.0, -1.0,
1.0, -1.0,
];
this.gl.bufferData(
this.gl.ARRAY_BUFFER,
new Float32Array(positions),
this.gl.STATIC_DRAW
);
return {
position: posBuff
};
}
buildShader(vsSource, fsSource) {
const vertShader = this.loadShader(this.gl.VERTEX_SHADER, vsSource);
const fragShader = this.loadShader(this.gl.FRAGMENT_SHADER, fsSource);
const shaderProg = this.gl.createProgram();
this.gl.attachShader(shaderProg, vertShader);
this.gl.attachShader(shaderProg, fragShader);
this.gl.linkProgram(shaderProg);
if (!this.gl.getProgramParameter(shaderProg, this.gl.LINK_STATUS)) {
console.error('Unable to initialize the shader program: ' + this.gl.getProgramInfoLog(shaderProg));
return null;
}
return shaderProg;
}
loadShader(type, source) {
const shader = this.gl.createShader(type);
this.gl.shaderSource(shader, source);
this.gl.compileShader(shader);
if (!this.gl.getShaderParameter(shader, this.gl.COMPILE_STATUS)) {
console.error('An error occurred compiling the shaders: ' + this.gl.getShaderInfoLog(shader));
this.gl.deleteShader(shader);
return null;
}
return shader;
}
//getTileColorArray as it appears in my code, in case you want to take a peek.
//every tileGrid[i][j] has a color, which is an array of 4 values between 0.0 and 1.0
//the fourth (last) value in tileGrid[i][j].color is always 1.0
getTileColorArray() {
let i_min = Math.max(0, Math.floor(this.x_pxPosToTilePos(this.x_viewPos)));
let i_max = Math.min(GLOBAL.map.worldWidth-1, i_min + Math.ceil(this.x_pxPosToTilePos(this.canvas.width)) + 1);
let j_min = Math.max(0, Math.floor(this.y_pxPosToTilePos(this.y_viewPos)));
let j_max = Math.min(GLOBAL.map.worldHeight-1, j_min + Math.ceil(this.y_pxPosToTilePos(this.canvas.height)) + 1);
let colorArray = [];
for (let i=i_min; i <= i_max; i++) {
for (let j=j_min; j <= j_max; j++) {
colorArray = colorArray.concat(GLOBAL.map.tileGrid[i][j].color);
}
}
return colorArray;
}
}
I've also included a pastebin of my full unaltered Gfx class in case you would like to look at that as well: https://pastebin.com/f0erR9qG
And a pastebin of my simplified code for the line numbers: https://pastebin.com/iB1pUZJa
WebGL 1.0 does not support texture wrapping on textures with non-power-of-two dimensions. There are two ways to solve this issue: one is to pad the texture with enough extra data to give it power-of-two dimensions, and the other is to simply turn off texture wrapping, like so:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
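Applied to the texture setup from the question, the full sequence might look like this (a sketch using the question's 27x20 RGBA/FLOAT data; NPOT textures also require a non-mipmapped MIN_FILTER, which the existing NEAREST setting already satisfies):
this.gl.texImage2D(
    this.gl.TEXTURE_2D, 0, this.gl.RGBA,
    width, height, 0,                  // NPOT sizes like 27x20 are fine...
    this.gl.RGBA, this.gl.FLOAT, float32Arr
);
// ...as long as wrapping is clamped and no mipmaps are sampled
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_S, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_WRAP_T, this.gl.CLAMP_TO_EDGE);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MAG_FILTER, this.gl.NEAREST);
this.gl.texParameteri(this.gl.TEXTURE_2D, this.gl.TEXTURE_MIN_FILTER, this.gl.NEAREST);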
I'm still getting strange behavior in my frag shader, but it's at least showing tiles now. I think the additional strange behavior is just a result of my shader algorithm not matching what I have envisioned.
I need some help with webgl.
I have to open the mouth of a face model (Lee Perry Smith) from code, but I don't know how to identify the correct vertexes to do it.
For my task I'm not allowed to use three.js.
I've tried to get the indices from Blender but I had no luck for some reason (it's like the vertices identified in Blender do not correspond to the JSON that I generated for WebGL).
Does someone have any idea..?
More infos:
I've used this snippet in blender to get the indices: http://blenderscripting.blogspot.it/2011/07/getting-index-of-currently-selected.html
then went into my JavaScript and used this function to edit the vertex coordinates (just to see if they were right, even though this is not the real transformation wanted):
function move_vertex(indices,x,y,z){
var vertex = headObject.vertices[0];
indices.forEach(function(index){
vertex[3*index] += x;
vertex[3*index+1]+=y;
vertex[3*index+2]+=z;
});
gl.bindBuffer(gl.ARRAY_BUFFER,headObject.modelVertexBuffer[0]);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, new Float32Array(vertex));
gl.bindBuffer(gl.ARRAY_BUFFER,null);
}
There are basically unlimited ways to do this; which one fits your situation I have no idea.
One would be to use a skinning system. Attach the mouth vertices to bones and move the bones.
Another would be to use morph targets. Basically save the mesh once with the mouth open and once with the mouth closed. Load both meshes in WebGL, pass both to your shader, and lerp between them:
attribute vec4 position1; // data from mouth closed model
attribute vec4 position2; // data from mouth open model
uniform float mixAmount;
uniform mat4 worldViewProjection;
...
// compute the position to use based on the mixAmount
// 0 = closed mouth
// 1 = open mouth
// 0.5 = 50% between open and closed mouth etc..
vec4 position = mix(position1, position2, mixAmount);
// use the result in the standard way
gl_Position = worldViewProjection * position;
You'd do a similar mix for normals though you'd want to normalize the result.
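For example (a one-line sketch, assuming normal1/normal2 attributes analogous to the positions above):
vec3 normal = normalize(mix(normal1, normal2, mixAmount));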
Most modeling packages support using morph targets inside the package. It's up to the file format and the exporter whether or not that data gets exported. The easy way to hack something together would be to export the face twice and load the 2 files with the code you have.
Another might be to use vertex colors. In your modeling program color the lip vertices a distinct color then find those vertices by color in your code.
Another would be to assign the lips a different material then use the material to find the vertices.
Some 3d modeling programs let you add meta data to vertices. That's basically a variation of the vertex colors method. You'd probably need to write your own exporter as few 3rd party formats support extra data. Even if the format could theoretically support extra data most exporters don't export it.
Similarly, some 3D modeling programs let you add vertices to selections/clusters/groups which you can then reference to find the lips. Again, this method probably requires your own exporter, as most formats don't support this data.
One other way is really hacky but will get the job done in a pinch: select the lip vertices and move them 1000 units to the right. Then in your program you can find all the vertices that are too far to the right and subtract 1000 units from each one to put them back where they originally would have been. This might mess up your normals but you can recompute them afterwards. A sketch of the decode step follows.
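Something like this, assuming a flat [x, y, z, ...] position array; the 500-unit threshold is a hypothetical cutoff between real geometry and the parked lip vertices:
const lipIndices = [];
for (let i = 0; i < positions.length; i += 3) {
  if (positions[i] > 500) {   // hypothetical threshold: anything this far right was parked
    positions[i] -= 1000;     // move it back to its original x
    lipIndices.push(i / 3);   // record it as a mouth vertex
  }
}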
Yet another would be to use the data you have and program an interface to highlight each vertex one at a time, write down which vertices are the mouth.
For example put an <input type="number"> on the screen. Based on the number, do something with that vertex: set a vertex color or tweak its position, something you can do to see it. Then write down which vertices are the mouth. If you're lucky they're in some range so you only have to write down the first and last ones.
const m4 = twgl.m4;
const v3 = twgl.v3;
const gl = document.querySelector("canvas").getContext("webgl");
const vs = `
attribute vec4 a_position;
attribute vec4 a_normal;
uniform mat4 u_matrix;
varying vec4 v_color;
void main() {
// Multiply the position by the matrix.
gl_Position = u_matrix * a_position;
// Pass the normal as a color to the fragment shader.
v_color = a_normal * .5 + .5;
}
`;
const fs = `
precision mediump float;
// Passed in from the vertex shader.
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
// Yes, this sample is using TWGL (https://twgljs.org).
// You should be able to tell what it's doing from the names
// of the functions and be able to easily translate that to raw WebGL
const programInfo = twgl.createProgramInfo(gl, [vs, fs]);
const bufferInfo = twgl.createBufferInfoFromArrays(gl, {
a_position: HeadData.positions,
a_normal: HeadData.normals,
});
const numVertices = bufferInfo.numElements;
let vertexId = 0; // id of vertex we're inspecting
let newVertexId = 251; // id of vertex we want to inspect
// these are normals and get converted to colors in the shader
const black = new Float32Array([-1, -1, -1]);
const red = new Float32Array([ 1, -1, -1]);
const white = new Float32Array([ 1, 1, 1]);
const colors = [
black,
red,
white,
];
const numElem = document.querySelector("#number");
numElem.textContent = newVertexId;
document.querySelector("#prev").addEventListener('click', e => {
newVertexId = (newVertexId + numVertices - 1) % numVertices;
numElem.textContent = newVertexId;
});
document.querySelector("#next").addEventListener('click', e => {
newVertexId = (newVertexId + 1) % numVertices;
numElem.textContent = newVertexId;
});
let frameCount = 0;
function render(time) {
++frameCount;
twgl.resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.CULL_FACE);
// restore old data
// for what's in bufferInfo see
// http://twgljs.org/docs/module-twgl.html#.BufferInfo
const origData = new Float32Array(
HeadData.normals.slice(vertexId * 3, (vertexId + 1) * 3)); // one vertex's normal = 3 floats
const oldOffset = vertexId * 3 * 4; // 4 bytes per float
gl.bindBuffer(gl.ARRAY_BUFFER, bufferInfo.attribs.a_normal.buffer);
gl.bufferSubData(gl.ARRAY_BUFFER, oldOffset, origData);
// set new vertex to a color
const newOffset = newVertexId * 3 * 4; // 4 bytes per float
gl.bufferSubData(
gl.ARRAY_BUFFER,
newOffset,
colors[(frameCount / 3 | 0) % colors.length]);
vertexId = newVertexId;
const fov = 45 * Math.PI / 180;
const aspect = gl.canvas.clientWidth / gl.canvas.clientHeight;
const zNear = 0.1;
const zFar = 50;
const projection = m4.perspective(fov, aspect, zNear, zFar);
const eye = [0, 0, 25];
const target = [0, 0, 0];
const up = [0, 1, 0];
const camera = m4.lookAt(eye, target, up);
const view = m4.inverse(camera);
const viewProjection = m4.multiply(projection, view);
const world = m4.identity();
const worldViewProjection = m4.multiply(viewProjection, world);
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
twgl.setUniforms(programInfo, {
u_matrix: worldViewProjection,
});
gl.drawArrays(gl.TRIANGLES, 0, numVertices);
requestAnimationFrame(render);
}
requestAnimationFrame(render);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
.ui {
position: absolute;
left: 1em;
top: 1em;
background: rgba(0,0,0,0.9);
padding: 1em;
font-size: large;
color: white;
font-family: monospace;
}
#number {
display: inline-block;
text-align: center;
}
<script src="https://twgljs.org/dist/2.x/twgl-full.min.js"></script>
<script src="https://webglfundamentals.org/webgl/resources/headdata.js"></script>
<canvas></canvas>
<div class="ui">
<button id="prev">⬅</button>
<span>vert ndx:</span><span id="number"></span>
<button id="next">➡</button>
</div>
In Three.js, I'm using a vertex shader to animate a large geometry.
I've also set up a Depth of Field effect on the output. The problem is that the Depth of Field effect doesn't seem to know about the changed positioning created in my vertex shader. It is responding as if the geometry is in the original position.
How can I update the depth information in my shader/material so that the DOF works correctly? THREE.Material has a depthWrite property, but it doesn't seem to be that...
My depth of field pass works like this:
renderer.render( this.originalScene, this.originalCamera, this.rtTextureColor, true );
this.originalScene.overrideMaterial = this.material_depth;
renderer.render( this.originalScene, this.originalCamera, this.rtTextureDepth, true );
rtTextureColor and rtTextureDepth are both WebGLRenderTargets. For some reason rtTextureColor is correct, but rtTextureDepth is not.
here is my vertex shader:
int sphereIndex = int(floor(position.x/10.));
float displacementVal = displacement[sphereIndex].w;
vec3 rotationDisplacement = displacement[sphereIndex].xyz;
vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * viewVector );
intensity = abs(pow( c - dot(vNormal, vNormel), p ));
float xVal = (displacementVal*orbitMultiplier) * sin(timeValue*rotationDisplacement.x);
float yVal = (displacementVal*orbitMultiplier) * cos(timeValue*rotationDisplacement.y);
float zVal = 0.0; // GLSL ES requires a float literal here
vec3 rotatePosition = vec3(xVal,yVal,zVal);
vec3 newPos = (position-vec3((10.*floor(position.x/10.)),0,0))+rotatePosition;
vec4 mvPosition;
mvPosition = (modelViewMatrix * vec4(newPos,1));
vViewPosition = -mvPosition.xyz;
vec4 p = projectionMatrix * mvPosition;
gl_Position = p;
Because you set the scene override material (this.originalScene.overrideMaterial = this.material_depth) before rendering into this.rtTextureDepth, the renderer doesn't use your custom vertex shader. The scene override material is a THREE.MeshDepthMaterial, which includes its own vertex shader.
One thing to try is writing a THREE.ShaderMaterial that works like THREE.MeshDepthMaterial but uses your custom vertex shader. Modifying built-in shaders isn't straightforward, but I would start from something like this:
var depthShader = THREE.ShaderLib['depth'];
var uniforms = THREE.UniformsUtils.clone(depthShader.uniforms);
var material = new THREE.ShaderMaterial({
uniforms: uniforms,
vertexShader: customVertexShader, // placeholder: your custom vertex shader source
fragmentShader: depthShader.fragmentShader
});
You'll have to add the uniforms for your custom vertex shader and also set the uniforms for the built-in depth shaders; search WebGLRenderer.js in the three.js source for MeshDepthMaterial.
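A sketch of that uniform merge (the custom uniform names here are assumptions taken from your vertex shader; the depth shader's own uniforms, such as mNear/mFar in older three.js versions, come along from the clone):
var depthShader = THREE.ShaderLib['depth'];
var uniforms = THREE.UniformsUtils.merge([
    depthShader.uniforms,                                        // built-in depth uniforms
    {
        displacement:    { type: 'v4v', value: displacementArray }, // assumption
        timeValue:       { type: 'f',   value: 0.0 },               // assumption
        orbitMultiplier: { type: 'f',   value: 1.0 }                // assumption
    }
]);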
I want to get the best performance possible when rendering simple textured shapes. The problem is that the Phong model requires extra lighting (which involves calculations), plus the colors are not the desired ones and need some tweaking.
To simplify the case I've decided to use a simple flat shader, but some problems occur:
<script id="vertShader" type="shader">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
</script>
<script id="fragShader" type="shader">
varying vec2 vUv;
uniform sampler2D material;
void main() {
gl_FragColor = texture2D(material, vUv);
}
</script>
Under certain camera angles some of the shelves disappear (you can notice the darker places, and see through them), which does not occur using the Phong material:
It happens with the shadow texture put inside each shelf. It's a textured cube with a shadow texture put inside each space (don't ask me why, this is just a task I got:))
I don't know what may be causing this. Maybe the loading?
I'm using the standard OBJ loader and adding textures. The OBJ loader sets the material to Phong and I'm switching it to the custom shader like this:
var objLoader = new THREE.OBJLoader( manager );
objLoader.load( obj, function ( model ) {
elements[name] = model;
console.log('loaded ', name);
var img = THREE.ImageUtils.loadTexture(mat);
elements[name].traverse( function ( child ) {
if ( child instanceof THREE.Mesh ) {
child.material = new THREE.ShaderMaterial( {
uniforms: {
color: {type: 'f', value: 0.0},
material: {type: 't', value: img}
},
fragmentShader: document.getElementById('fragShader').text,
vertexShader: document.getElementById('vertShader').text,
} );
}
});
Any suggestions would be helpful.
Every surface is drawn in one direction (clockwise or counter-clockwise). If you are viewing a surface from the other side, it will "disappear". I think this is the problem with your own shader: you should either render the faces from both sides (at worse performance) or calculate from which side they should be rendered.
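With your custom ShaderMaterial that is just a matter of setting the side property (a sketch of the question's material with that option added):
child.material = new THREE.ShaderMaterial({
    side: THREE.DoubleSide, // draw back faces too, at some performance cost
    uniforms: {
        color: {type: 'f', value: 0.0},
        material: {type: 't', value: img}
    },
    fragmentShader: document.getElementById('fragShader').text,
    vertexShader: document.getElementById('vertShader').text
});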
To optimize the performance slightly you could use a standard material from THREE; you can use those without writing your own shader. Something like:
child.material = new THREE.MeshBasicMaterial({
side: THREE.DoubleSide,
color: 0x000000
// ...
});
I created a skybox material with textures in a project of my own:
function getSkyboxMaterial() {
var faceMaterials = getSkyboxFaces();
var skyboxMaterial = new THREE.MeshFaceMaterial(faceMaterials);
return skyboxMaterial;
}
function getSkyboxFaces() {
var NUMBER_OF_FACES = 6, faces = [], texture, faceMaterial, texturePath, i;
for (i = 0; i < NUMBER_OF_FACES; i++) {
texturePath = IMAGE_PREFIX + DIRECTIONS[i] + IMAGE_SUFFIX;
texture = loadFlippedTexture( texturePath );
faceMaterial = getFaceMaterial( texture );
faces.push( faceMaterial );
}
return faces;
}
function loadFlippedTexture(texturePath) {
var texture = loadTexture(texturePath);
flipTexture(texture); // This is necessary, because the skybox-textures are mirrored.
return texture;
}
function loadTexture(path) {
return THREE.ImageUtils.loadTexture(path);
}
function flipTexture(texture) {
texture.repeat.set(-1, 1);
texture.offset.set(1, 0);
return texture;
}
function getFaceMaterial(texture) {
var faceMaterial = new THREE.MeshBasicMaterial({
map: texture,
side: THREE.DoubleSide
});
return faceMaterial;
}
Hi folks,
I've got a question regarding surfaces in Three.js:
I got a bunch of Vec3 points and want to interpolate a surface through them. While searching, I stumbled across beziers (three.js bezier - only as lines) and what looked more like what I was searching for: three.js NURBS. I've tried to reconstruct the code, but the documentation was terrible (pages like this) and I didn't get how everything worked by reconstructing the code...
So here's the question:
Is there any easy way to get a shape out of my calculated points? (I would still be happy if it's not interpolated.)
Thank you guys!
Mat
Edit: What I want to achieve is a surface plot. I stumbled across http://acko.net/blog/making-mathbox/ but it's way too big for my needs...
After some trial and error I found a solution: add a plane and then transform the individual vertices.
// need to setup 'step', 'xStart', 'xEnd', 'yStart', 'yEnd'
// calc the variables
var width = Math.abs(-xStart+xEnd),
height = Math.abs(-yStart+yEnd);
var stepsX = width*step, stepsY = height*step;
var posX = (xStart+xEnd)/2;
var posZ = (yStart+yEnd)/2;
// add a plane and morph it to a function
var geometry = new THREE.PlaneGeometry( width, height, stepsX - 1, stepsY - 1 );
geometry.applyMatrix( new THREE.Matrix4().makeRotationX( - Math.PI / 2 ) );
var size = stepsX * (stepsY),
data = new Float32Array( size );
var count = 0, scope = {};
mesh = new THREE.Mesh( geometry, new THREE.MeshNormalMaterial( {
side : THREE.DoubleSide,
transparent: true,
shading: THREE.SmoothShading,
opacity : _opacity }));
mesh.updateMatrixWorld();
// calc the y value for every vertex
for ( var i = 0; i < size; i ++ ) {
// calculate the current values
// http://stackoverflow.com/questions/11495089/how-to-get-the-absolute-position-of-a-vertex-in-three-js
var vector = mesh.geometry.vertices[i].clone();
vector.applyMatrix4(
mesh.matrixWorld
);
// set them into the scope
scope.x = vector.x + posX;
scope.y = vector.z + posZ;
// calculate point and write it in a temp array
data[i] = math.eval(term, scope);
}
// push the new vertex data
for ( var i = 0, l = geometry.vertices.length; i < l; i ++ ) {
geometry.vertices[ i ].y = data[ i ];
}
// update the new normals
geometry.computeFaceNormals();
geometry.computeVertexNormals();
// add to scene
scene.add( mesh );
Only issue is that it does not work for functions with poles, like tan(x). This snippet uses math.js to evaluate the term.
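If you need such functions anyway, one workaround is to sanitize the evaluated values before writing them into the vertices (a sketch; the clamp range is an arbitrary choice):
// inside the evaluation loop, replacing data[i] = math.eval(term, scope);
var value = math.eval(term, scope);
if (!isFinite(value)) {
    value = 0; // tan(x) returns huge or non-finite values near its poles
}
data[i] = Math.max(-1000, Math.min(1000, value));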
Greetings Mat