Texture which maintains proportion as scene background in three.js - javascript

I have a scene where I want to add a background. I have a PNG image (2K resolution); on PC it displays at the right proportions, but on mobile it is badly distorted.
My code is the following:
var texture = THREE.ImageUtils.loadTexture('img/texture.png');
And adding it as the background is just this:
scene = new THREE.Scene();
scene.background = texture;
From some searching I've seen that I may have to create a separate scene for the background, but that doesn't seem like the easiest solution. Is there a better way to do this?
(As always, sorry for my bad English.)

You can try approaching this with THREE.ShaderMaterial:
class MyBackgroundPlane extends THREE.Mesh {
  constructor(){
    super(
      new THREE.PlaneBufferGeometry( 2, 2, 1, 1 ),
      new THREE.ShaderMaterial({
        uniforms: {
          uTexture: { value: null },
          uAspect: { value: 1 }
        },
        vertexShader: `
          varying vec2 vUv;
          uniform float uAspect;
          void main(){
            vUv = uv;         // pass the texture coordinates through
            vUv.x *= uAspect; // scale the coordinates to compensate for the aspect ratio
            gl_Position = vec4( position.xy, 1., 1. ); // full-screen quad, no camera transform
          }
        `,
        fragmentShader: `
          varying vec2 vUv;
          uniform sampler2D uTexture;
          void main(){
            gl_FragColor = texture2D( uTexture, vUv );
          }
        `
      })
    )
    this.frustumCulled = false // always render, regardless of the camera frustum
  }
  setAspect( aspect ){
    this.material.uniforms.uAspect.value = aspect
  }
  setTexture( texture ){
    this.material.uniforms.uTexture.value = texture
  }
}
You kind of have to figure out what needs to happen when the screen is portrait and when it is landscape. One approach could be to use uniform vec2 uScale; and then set the vertical and horizontal scales differently depending on the orientation, as in the sketch below. The same thing could be done with the scene graph, by attaching a regular plane to a camera for example, and then managing its scale.
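For reference, a minimal sketch of wiring the class above to the window; what you actually feed setAspect (the raw screen aspect, or the screen aspect divided by the image aspect) depends on whether you want cover or contain behavior:
var background = new MyBackgroundPlane()
background.setTexture( texture )
scene.add( background )

function onResize(){
  // keep the shader uniform in sync with the screen
  background.setAspect( window.innerWidth / window.innerHeight )
}
window.addEventListener( 'resize', onResize )
onResize()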

As an alternative, you can use a CSS-based background:
#background {
    background-image: url('http://youring.com/test/img/texture.png');
    position: fixed;
    top: 0;
    left: 0;
    height: 100%;
    width: 100%;
    z-index: -1;
}
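Note that this assumes a matching element sits behind the canvas, e.g. a <div id="background"></div> placed directly inside <body>.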
Just create your renderer like this so it's possible to see through the canvas:
renderer = new THREE.WebGLRenderer( { antialias: true, alpha: true } );
DEMO: https://jsfiddle.net/f2Lommf5/5052/

Change values of Phong Shader with sliders

I am trying to implement a 3D scene with WebGL and JavaScript. The final scene is supposed to show a cuboid with smaller cuboids, pyramids and spheres on all sides. The smaller spheres have to rotate with the big cuboid. I implemented Phong shading, and this works fine. Now I want to change the values of shininess, lightPos, and lightIntensity with three sliders to the right of the canvas that displays the scene. The slider for shininess is apparently not working, and I'm struggling even more with the other two sliders, as lightPos and lightIntensity are vec3 elements that are declared as constants. The code for the three variables looks like this:
const vec3 lightPos = vec3(1.0,-1.0,1.0);
float shininess = 16.0;
const vec3 lightIntensity = vec3(1.0, 1.0, 1.0);
At the moment the slider for shininess looks like this:
<input id="shininess" type="range" min="1" max="50" />
var shininessElement = document.getElementById("shininess");
shininessElement.onchange = function() {
    shininess = shininessElement.value;
    window.requestAnimationFrame(animate);
};
I'm pretty sure that I did something terribly wrong, but my research didn't lead to any results and I have no idea what to do next, so I'd really appreciate your help.
If you need the complete code, please let me know.
You probably should read some other tutorials on WebGL. In particular, you can't set shininess unless you make it a uniform; you then look up the uniform's location and set it with the appropriate gl.uniform function (gl.uniform1f for a single float).
Here's a simple example of using a slider to set a value and then sending that value to a shader by setting a uniform variable in the shader.
const gl = document.querySelector("canvas").getContext('webgl');

const vs = `
void main() {
  gl_Position = vec4(0, 0, 0, 1);
  gl_PointSize = 100.0;
}
`;

const fs = `
precision mediump float;
uniform float shininess;
void main() {
  gl_FragColor = vec4(shininess, 0, 0, 1);
}
`;

// compiles shaders, links program
const prg = twgl.createProgram(gl, [vs, fs]);
const shininessLocation = gl.getUniformLocation(prg, "shininess");

let shininess = .5;

draw();

function draw() {
  gl.useProgram(prg);
  gl.uniform1f(shininessLocation, shininess);
  gl.drawArrays(gl.POINTS, 0, 1);
}

document.querySelector("input").addEventListener('input', (e) => {
  shininess = e.target.value / 100;
  draw();
});

<script src="https://twgljs.org/dist/3.x/twgl.min.js"></script>
<canvas></canvas>
<input type="range" min="0" max="100" value="50" />
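The vec3 values from the question work the same way once the constants are turned into uniforms; a minimal sketch (the uniform names follow the question, everything else is illustrative):
// In the shader, replace the constants with uniforms:
//   uniform vec3 lightPos;
//   uniform vec3 lightIntensity;
const lightPosLocation = gl.getUniformLocation(prg, "lightPos");
const lightIntensityLocation = gl.getUniformLocation(prg, "lightIntensity");
gl.useProgram(prg);
gl.uniform3fv(lightPosLocation, [1.0, -1.0, 1.0]);      // feed slider values here
gl.uniform3fv(lightIntensityLocation, [1.0, 1.0, 1.0]);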

ThreeJS: Adding shadows to custom shaders r75

In the past it was possible to incorporate the shadow calculations in custom shaders as described here and summed up here.
With r75, the lighting and shadow systems seem to have been merged, changing this. I attempted to work my way through the source to understand it, but the abstractions/modules are a little tricky to follow.
I've distilled my shaders down to what I have so far:
Vertex:
#chunk(shadowmap_pars_vertex);

void main() {
    vec4 worldPosition = modelMatrix * vec4(position, 1.0);
    vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * mvPosition;
    #chunk(shadowmap_vertex);
}

Fragment:
#chunk(common);
#chunk(lights_pars);
#chunk(shadowmap_pars_fragment);

void main() {
    //#if ( NUM_DIR_LIGHTS > 0 )
    IncidentLight directLight;
    DirectionalLight directionalLight;
    float shadowValue = 1.0;
    for ( int i = 0; i < NUM_DIR_LIGHTS; i ++ ) {
        directionalLight = directionalLights[ i ];
        shadowValue = getShadow( directionalShadowMap[ i ], directionalLight.shadowMapSize, directionalLight.shadowBias, directionalLight.shadowRadius, vDirectionalShadowCoord[ i ] );
    }
    //#endif
    gl_FragColor = vec4(vec3(shadowValue), 1.0);
}
I pulled the directional light loop from the lights_template chunk. Unfortunately, shadowValue always comes back as 1.0, although the shader otherwise compiles and renders correctly.
My JS has the appropriate castShadow and receiveShadow set. Other meshes using Phong render shadows correctly.
Thanks so much in advance.
Edit:
Adding material.lights = true; to the ShaderMaterial makes something appear; however, the value of shadowValue in the fragment shader is clearly incorrect on the side of the sphere facing away from the light. Screenshots attached.
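For reference, a minimal sketch of that material setup, assuming the usual r75 pattern of merging in the light uniforms (the shader sources are the ones above):
var material = new THREE.ShaderMaterial({
    uniforms: THREE.UniformsUtils.merge([
        THREE.UniformsLib['lights'],
        { /* your own uniforms */ }
    ]),
    vertexShader: vertexShaderSource,     // the vertex shader above
    fragmentShader: fragmentShaderSource, // the fragment shader above
    lights: true // ask the renderer to fill in the light/shadow uniforms
});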

Three js Shader Material modify depth buffer

In Three js, I'm using a vertex shader to animate a large geometry.
I've also set up a Depth of Field effect on the output. The problem is that the Depth of Field effect doesn't seem to know about the changed positioning created in my vertex shader. It is responding as if the geometry is in the original position.
How can I update the depth information in my shader/material so that the DOF works correctly? THREE.Material has a depthWrite property, but it doesn't seem to be that...
My depth of field pass works like this:
renderer.render( this.originalScene, this.originalCamera, this.rtTextureColor, true );
this.originalScene.overrideMaterial = this.material_depth;
renderer.render( this.originalScene, this.originalCamera, this.rtTextureDepth, true );
rtTextureColor and rtTextureDepth are both WebGLRenderTargets. For some reason rtTextureColor is correct, but rtTextureDepth is not.
Here is my vertex shader:
int sphereIndex = int(floor(position.x / 10.));
float displacementVal = displacement[sphereIndex].w;
vec3 rotationDisplacement = displacement[sphereIndex].xyz;

vNormal = normalize( normalMatrix * normal );
vec3 vNormel = normalize( normalMatrix * viewVector );
intensity = abs(pow( c - dot(vNormal, vNormel), p ));

float xVal = (displacementVal * orbitMultiplier) * sin(timeValue * rotationDisplacement.x);
float yVal = (displacementVal * orbitMultiplier) * cos(timeValue * rotationDisplacement.y);
float zVal = 0.0; // must be a float literal in GLSL ES

vec3 rotatePosition = vec3(xVal, yVal, zVal);
vec3 newPos = (position - vec3((10. * floor(position.x / 10.)), 0, 0)) + rotatePosition;

vec4 mvPosition = modelViewMatrix * vec4(newPos, 1.);
vViewPosition = -mvPosition.xyz;
vec4 p = projectionMatrix * mvPosition;
gl_Position = p;
Because you set the scene override material (this.originalScene.overrideMaterial = this.material_depth) before rendering into this.rtTextureDepth, the renderer doesn't use your custom vertex shader. The scene override material is a THREE.MeshDepthMaterial, which includes its own vertex shader.
One thing to try is writing a THREE.ShaderMaterial that works like THREE.MeshDepthMaterial but uses your custom vertex shader. Modifying built-in shaders isn't straightforward, but I would start from something like this:
var depthShader = THREE.ShaderLib['depth'];
var uniforms = THREE.UniformsUtils.clone(depthShader.uniforms);
var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: myCustomVertexShader, // your custom vertex shader source goes here
    fragmentShader: depthShader.fragmentShader
});
You'll have to add the uniforms for your custom vertex shader and also set the uniforms for the built-in depth shaders; search WebGLRenderer.js in the three.js source for MeshDepthMaterial.
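The depth pass from the question would then use this material as the override instead of the stock one (a sketch):
this.originalScene.overrideMaterial = material; // custom depth material with your vertex shader
renderer.render( this.originalScene, this.originalCamera, this.rtTextureDepth, true );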

Shaders performance with threeJS

I want to get the best performance possible when rendering simple textured shapes. The problem with the Phong model is that it requires extra lighting (which involves calculations), and the colors are not the ones desired and need some tweaking.
To simplify the case I've decided to use a simple flat shader, but some problems occur:
<script id="vertShader" type="shader">
varying vec2 vUv;
void main() {
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position,1.0);
}
</script>
<script id="fragShader" type="shader">
varying vec2 vUv;
uniform sampler2D material;
void main() {
gl_FragColor = texture2D(material, vUv);
}
</script>
Under certain camera angles some of the shelves disappear (you can notice the darker places, and see through them), which does not occur using the Phong material:
It happens with the shadow texture put inside each shelf. It's a textured cube with a shadow texture put inside each space (don't ask me why, this is just a task I got :)).
I don't know what may be causing this. Maybe the loading?
I'm using the standard OBJ loader and adding textures. The OBJ loader sets the material to Phong and I'm switching it to the custom shader like this:
var objLoader = new THREE.OBJLoader( manager );
objLoader.load( obj, function ( model ) {
    elements[name] = model;
    console.log('loaded ', name);
    var img = THREE.ImageUtils.loadTexture(mat);
    elements[name].traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = new THREE.ShaderMaterial( {
                uniforms: {
                    color: {type: 'f', value: 0.0},
                    material: {type: 't', value: img}
                },
                fragmentShader: document.getElementById('fragShader').text,
                vertexShader: document.getElementById('vertShader').text,
            } );
        }
    });
});
Any suggestions would be helpful.
Every face is drawn with a winding direction (clockwise or counter-clockwise), and by default only one side is rendered. If you are looking at a face from the other side, it will "disappear". I think the problem is in your own shader/material setup: you should either render faces from both sides (at a performance cost) or work out from which side they need to be rendered.
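With the custom ShaderMaterial from the question this is a single property, e.g.:
child.material.side = THREE.DoubleSide; // draw back faces too, at some performance cost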
To optimize the performance slightly, you could use a standard material from THREE; those work without writing your own shader. Something like:
child.material = new THREE.MeshBasicMaterial({
    side: THREE.DoubleSide,
    color: 0x000000
    // ...
});
In a project of my own, I created a skybox material with textures:
function getSkyboxMaterial() {
    var faceMaterials = getSkyboxFaces();
    var skyboxMaterial = new THREE.MeshFaceMaterial(faceMaterials);
    return skyboxMaterial;
}

function getSkyboxFaces() {
    var NUMBER_OF_FACES = 6, faces = [], texture, faceMaterial, texturePath, i;
    for (i = 0; i < NUMBER_OF_FACES; i++) {
        texturePath = IMAGE_PREFIX + DIRECTIONS[i] + IMAGE_SUFFIX;
        texture = loadFlippedTexture( texturePath );
        faceMaterial = getFaceMaterial( texture );
        faces.push( faceMaterial );
    }
    return faces;
}

function loadFlippedTexture(texturePath) {
    var texture = loadTexture(texturePath);
    flipTexture(texture); // This is necessary, because the skybox textures are mirrored.
    return texture;
}

function loadTexture(path) {
    return THREE.ImageUtils.loadTexture(path);
}

function flipTexture(texture) {
    texture.repeat.set(-1, 1);
    texture.offset.set(1, 0);
    return texture;
}

function getFaceMaterial(texture) {
    var faceMaterial = new THREE.MeshBasicMaterial({
        map: texture,
        side: THREE.DoubleSide
    });
    return faceMaterial;
}
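Used roughly like this, for example (the size is arbitrary, and this assumes a three.js version where MeshFaceMaterial and BoxGeometry coexist):
var skybox = new THREE.Mesh( new THREE.BoxGeometry( 1000, 1000, 1000 ), getSkyboxMaterial() );
scene.add( skybox );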

Passing WebRTC video into geometry with GLSL Shader

This is my first time playing around with vertex shaders in a WebGL context. I want to texture a primitive with video, but instead of just mapping the video onto the surface, I'm trying to translate the luma of the video into vertex displacement. This is kind of like the Rutt-Etra, but in a digital format. A bright pixel should push the vertex forward, while a darker pixel does the inverse. Can anyone tell me what I'm doing wrong? I can't find a reference for this error.
When compiling my code, I get the following when using sampler2D and texture2D:
Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.65 Safari/537.36 | WebGL 1.0 (OpenGL ES 2.0 Chromium) | WebKit | WebKit WebGL | WebGL GLSL ES 1.0 (OpenGL ES GLSL ES 1.0 Chromium) Three.js:264
ERROR: 0:57: 'ftransform' : no matching overloaded function found
ERROR: 0:57: 'assign' : cannot convert from 'const mediump float' to 'Position highp 4-component vector of float'
ERROR: 0:60: 'gl_TextureMatrix' : undeclared identifier
ERROR: 0:60: 'gl_TextureMatrix' : left of '[' is not of type array, matrix, or vector
ERROR: 0:60: 'gl_MultiTexCoord0' : undeclared identifier
Three.js:257
<!doctype html>
<html>
<head>
    <title>boiler plate for three.js</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, user-scalable=no, minimum-scale=1.0, maximum-scale=1.0">
    <script src="vendor/three.js/Three.js"></script>
    <script src="vendor/three.js/Detector.js"></script>
    <script src="vendor/three.js/Stats.js"></script>
    <script src="vendor/threex/THREEx.screenshot.js"></script>
    <script src="vendor/threex/THREEx.FullScreen.js"></script>
    <script src="vendor/threex/THREEx.WindowResize.js"></script>
    <script src="vendor/threex.dragpancontrols.js"></script>
    <script src="vendor/headtrackr.js"></script>
    <style>
        body {
            overflow: hidden;
            padding: 0;
            margin: 0;
            color: #222;
            background-color: #BBB;
            font-family: arial;
            font-size: 100%;
        }
        #info .top {
            position: absolute;
            top: 0px;
            width: 100%;
            padding: 5px;
            text-align: center;
        }
        #info a {
            color: #66F;
            text-decoration: none;
        }
        #info a:hover {
            text-decoration: underline;
        }
        #info .bottom {
            position: absolute;
            bottom: 0px;
            right: 5px;
            padding: 5px;
        }
    </style>
</head>
<body>
    <!-- three.js container -->
    <div id="container"></div>
    <!-- info on screen display -->
    <div id="info">
        <!--<div class="top">
            LearningThree.js
            boiler plate for
            three.js
        </div>-->
        <div class="bottom" id="inlineDoc">
            - <i>p</i> for screenshot
        </div>
    </div>
    <canvas id="compare" width="320" height="240" style="display:none"></canvas>
    <video id="vid" autoplay loop></video>
<script type="x-shader/x-vertex" id="vertexShader">
varying vec2 texcoord0;
void main()
{
// perform standard transform on vertex
gl_Position = ftransform();
// transform texcoords
texcoord0 = vec2(gl_TextureMatrix[0] * gl_MultiTexCoord0);
}
</script>
<script type="x-shader/x-vertex" id="fragmentShader">
varying vec2 texcoord0;
uniform sampler2D tex0;
uniform vec2 imageSize;
uniform float coef;
const vec4 lumcoeff = vec4(0.299,0.587,0.114,0.);
void main (void)
{
vec4 pixel = texture2D(tex0, texcoord0);
float luma = dot(lumcoeff, pixel);
gl_FragColor = vec4((texcoord0.x / imageSize.x), luma, (texcoord0.y / imageSize.y) , 1.0);
}
</script>
<script type="text/javascript">
var stats, scene, renderer;
var camera, cameraControls;
var videoInput = document.getElementById('vid');
var canvasInput = document.getElementById('compare');
var projector = new THREE.Projector();
var gl;
var mesh,
cube,
attributes,
uniforms,
material,
materials;
var videoTexture = new THREE.Texture( videoInput );
if( !init() ) animate();
// init the scene
function init(){
if( Detector.webgl ){
renderer = new THREE.WebGLRenderer({
antialias : true, // to get smoother output
preserveDrawingBuffer : true // to allow screenshot
});
renderer.setClearColorHex( 0xBBBBBB, 1 );
// uncomment if webgl is required
//}else{
// Detector.addGetWebGLMessage();
// return true;
}else{
renderer = new THREE.CanvasRenderer();
gl=renderer;
}
renderer.setSize( window.innerWidth, window.innerHeight );
document.getElementById('container').appendChild(renderer.domElement);
// create a scene
scene = new THREE.Scene();
// put a camera in the scene
camera = new THREE.PerspectiveCamera( 23, window.innerWidth / window.innerHeight, 1, 100000 );
camera.position.z = 0;
scene.add( camera );
//
// // create a camera contol
// cameraControls = new THREEx.DragPanControls(camera)
// transparently support window resize
// THREEx.WindowResize.bind(renderer, camera);
// allow 'p' to make screenshot
THREEx.Screenshot.bindKey(renderer);
// allow 'f' to go fullscreen where this feature is supported
if( THREEx.FullScreen.available() ){
THREEx.FullScreen.bindKey();
document.getElementById('inlineDoc').innerHTML += "- <i>f</i> for fullscreen";
}
materials = new THREE.MeshLambertMaterial({
map : videoTexture
});
attributes = {};
uniforms = {
tex0: {type: 'mat2', value: materials},
imageSize: {type: 'f', value: []},
coef: {type: 'f', value: 1.0}
};
//Adding a directional light source to see anything..
var directionalLight = new THREE.DirectionalLight(0xffffff);
directionalLight.position.set(1, 1, 1).normalize();
scene.add(directionalLight);
// video styling
videoInput.style.position = 'absolute';
videoInput.style.top = '50px';
videoInput.style.zIndex = '100001';
videoInput.style.display = 'block';
// set up camera controller
headtrackr.controllers.three.realisticAbsoluteCameraControl(camera, 1, [0,0,0], new THREE.Vector3(0,0,0), {damping : 1.1});
var htracker = new headtrackr.Tracker();
htracker.init(videoInput, canvasInput);
htracker.start();
// var stats = new Stats();
// stats.domElement.style.position = 'absolute';
// stats.domElement.style.top = '0px';
// document.body.appendChild( stats.domElement );
document.addEventListener('headtrackrStatus',
function (event) {
if (event.status == "found") {
addCube();
}
}
);
}
// animation loop
function animate() {
// loop on request animation loop
// - it has to be at the begining of the function
// - see details at http://my.opera.com/emoller/blog/2011/12/20/requestanimationframe-for-smart-er-animating
requestAnimationFrame( animate );
// do the render
render();
// update stats
//stats.update();
}
function render() {
// convert matrix of every frame of video -> texture
uniforms.tex0 = materials;
uniforms.coef = 0.2;
uniforms.imageSize.x = window.innerWidth;
uniforms.imageSize.y = window.innerHeight;
// update camera controls
// cameraControls.update();
if( videoInput.readyState === videoInput.HAVE_ENOUGH_DATA ){
videoTexture.needsUpdate = true;
}
// actually render the scene
renderer.render( scene, camera );
}
function addCube(){
material = new THREE.ShaderMaterial({
uniforms: uniforms,
attributes: attributes,
vertexShader: document.getElementById('vertexShader').textContent,
fragmentShader: document.getElementById('fragmentShader').textContent,
transparent: true
});
//The cube
cube = new THREE.Mesh(new THREE.CubeGeometry(40, 30, 10, 1, 1, 1, material), new THREE.MeshFaceMaterial());
cube.overdraw = true;
scene.add(cube);
}
</script>
</body>
</html>
The primary problem here is that you are using old GLSL reserved words that were intended for programmable/fixed-function interop. In OpenGL ES 2.0, things like gl_MultiTexCoord0 and gl_TextureMatrix[n] are not defined, because ES completely removed the legacy fixed-function vertex array baggage that regular OpenGL has to deal with. In OpenGL, those reserved words gave you per-texture-unit matrix and vertex array state; in OpenGL ES they simply do not exist.
To get around this, you have to use generic vertex attributes (e.g. attribute vec2 tex_st) instead of having a 1:1 mapping between texture coordinate pointers and texture units. Likewise, there is no texture matrix associated with each texture unit. To duplicate the functionality of texture matrices, you need to use matrix uniforms in your vertex/fragment shader.
To be honest, I cannot remember the last time I actually found it useful to have a separate texture matrix / texture coordinate pointer for each texture unit when using shaders... I often have 4 or 5 different textures and only need maybe 1 or 2 sets of texture coordinates. It is no big loss.
The kicker here is ftransform(). It was intended to make it possible to write one-line vertex shaders in OpenGL that behave the same way as the fixed-function pipeline. You must have copied and pasted a shader that was written for OpenGL 2.x or 3.x (compatibility profile). Explaining how to fix everything in this shader could be a real chore; you may have to learn more about GLSL ES before most of what I just wrote makes sense :-\
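For reference, this is roughly what an ES-compatible version of the vertex shader would look like when used with a three.js ShaderMaterial, which declares position, uv, and the matrix uniforms for you (a sketch of the two substitutions, not a drop-in fix for the rest of the code):
varying vec2 texcoord0;
void main()
{
    // the generic attribute 'uv' replaces gl_TextureMatrix[0] * gl_MultiTexCoord0
    texcoord0 = uv;
    // explicit matrix uniforms replace ftransform()
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}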
