I am new to three.js so I am not sure whether I have made a mistake or my approach is wrong in the first place.
Check out my demo (use left and right arrow to navigate around):
http://www.ralphunden.de/files/kloetze/index.htm
The problem is that "inner edges" are not showing up:
http://i.imgur.com/vy8iH0J.png
The idea is to achieve a "blocky" look by using solid shapes with highlighted borders.
The best approach I have found is to use a BoxGeometry with a basic material and adding the outline with EdgesHelper.
The passed parameter shape is just a list of coordinates.
I am also not sure about adding the resulting meshes together like this (I do this so I can remove it from the scene comfortably later and it hasn't been a problem so far).
Here's the code:
function draw_shape(shape, offset, colour) {
    var mesh = new THREE.Mesh();
    for (var i = 0; i < shape.length; i++) {
        var geometry = new THREE.BoxGeometry(BLOCKSIZE, BLOCKSIZE, BLOCKSIZE);
        var material = new THREE.MeshBasicMaterial({ color: colour });
        var tmp = new THREE.Mesh(geometry, material);
        tmp.position.x = shape[i][0] * BLOCKSIZE - offset[0];
        tmp.position.y = shape[i][1] * BLOCKSIZE - offset[1];
        tmp.position.z = shape[i][2] * BLOCKSIZE - offset[2];
        mesh.add(tmp);
        var outline = new THREE.EdgesHelper(tmp, 0x000000);
        outline.material.linewidth = 2;
        mesh.add(outline);
    }
    return mesh;
}
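For what it's worth, the position arithmetic in draw_shape can be pulled out into a plain helper (blockPositions is a hypothetical name, not part of the code above) so the offsets can be sanity-checked without building a scene:

```javascript
// Hypothetical helper mirroring the position math in draw_shape above:
// converts grid coordinates into world positions.
function blockPositions(shape, offset, blockSize) {
  return shape.map(function (cell) {
    return [
      cell[0] * blockSize - offset[0],
      cell[1] * blockSize - offset[1],
      cell[2] * blockSize - offset[2]
    ];
  });
}
```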
Thanks in advance for any responses!
Ok. I faked it out like this:
var geometry = new THREE.BoxGeometry(BLOCKSIZE - 1 , BLOCKSIZE - 1, BLOCKSIZE - 1);
This means I made the boxes smaller by 1, which introduces a small gap to the next box. This works, and was something I tried early on, but back then it looked very bad because the gap was too noticeable.
Then it struck me to try it again while increasing BLOCKSIZE considerably, so that the gap becomes relatively smaller, and to move the camera further away by the same factor (i.e. I increased BLOCKSIZE from 30 to 90 and moved the camera from about 200 to 600).
The result is satisfactory.
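To see why scaling helps: shrinking each box by 1 unit leaves a fixed 1-unit gap, so the gap as a fraction of the block size falls as BLOCKSIZE grows (relativeGap is just an illustrative helper, not part of the code above):

```javascript
// The fixed 1-unit gap expressed as a fraction of the block size.
function relativeGap(blockSize) {
  return 1 / blockSize;
}

// BLOCKSIZE 30 → the gap is ~3.3% of a block; BLOCKSIZE 90 → ~1.1%.
// Moving the camera back by the same factor keeps the on-screen block
// size constant, so only the gap shrinks visually.
```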
I'm working on a simulation of the famous Prague Astronomical Clock: https://shetline.com/orloj/
I've already produced a lat/lon grid in a crude way, by 2-d drawing lines onto the image which is used for the surface of the globe, but this method doesn't work very well:
(Yes, the globe is supposed to be upside down, as it is on the original clock.)
The grid lines are very jagged this way, and they fade out near the poles because of the scaling near the poles of the original rectangular map image.
I'm trying to find a tutorial on how to do a simple bit of decoration like this by using Three.js/WebGL constructs, but no luck so far. I can get a bit of a start on what I want with the following code:
this.camera = new PerspectiveCamera(FIELD_OF_VIEW, 1);
this.scene = new Scene();
const globe = new SphereGeometry(GLOBE_RADIUS, 50, 50);
globe.rotateY(-PI / 2);
this.globeMesh = new Mesh(globe, new MeshBasicMaterial({ map: new CanvasTexture(Globe.mapCanvas) }));
this.renderer = new WebGLRenderer({ alpha: true });
this.renderer.setSize(GLOBE_PIXEL_SIZE, GLOBE_PIXEL_SIZE);
this.rendererHost.appendChild(this.renderer.domElement);
this.scene.add(this.globeMesh);
const circle = new Line(new CircleGeometry(GLOBE_RADIUS + 0.1, 50),
  new LineBasicMaterial({ color: GRID_COLOR, linewidth: 20 }));
this.globeMesh.add(circle);
...by adding a circle, and making the edges of the circle poke out just a bit beyond the surface of the globe, but the result is only a very faint line that doesn't respond to my attempts to make the line thicker. I've also tried something similar with a torus floating just above the surface of the globe, but that doesn't work very well either.
What I want is some sort of additional layer for the lines, or a composite material that combines my pixel image with geometrically-defined lines (rather than painted lines), but I'm not finding the Three.js documentation very clear for figuring out how to do this.
I'm not even sure what kind of geometric model to use for these lines. They're all circles of a sort, but to have a real visual extent it seems the geometry would need to be some sort of globe-hugging, narrow, thin ribbons resting on the surface of the globe.
I ended up using (very, very short) cylinders as my lines of longitude and latitude, floating ever so slightly above the surface of the globe. It seems like anything composed of true line objects can't be guaranteed to honor linewidth, and I didn't want to settle for always-one-pixel lines.
The lines of longitude are simple hollow cylinders, as is the equator. All of the other lines of latitude are flared cylinders, wider at one end to conform to the shape of the globe.
const LINE_THICKNESS = 0.03;
const HAG = 0.01; // Slight distance above the globe at which longitude/latitude lines are drawn.
// ...
// Lines of longitude
for (let n = 0; n < 24; ++n) {
  const line = new CylinderGeometry(GLOBE_RADIUS + HAG, GLOBE_RADIUS + HAG, LINE_THICKNESS, 50, 1, true);
  line.translate(0, -LINE_THICKNESS / 2, 0);
  line.rotateX(PI / 2);
  line.rotateY(n * PI / 12);
  const mesh = new Mesh(line, new MeshBasicMaterial({ color: GRID_COLOR }));
  this.globeMesh.add(mesh);
}
// Lines of latitude
for (let n = 1; n < 12; ++n) {
  const lat = (n - 6) * PI / 12;
  const r = GLOBE_RADIUS * cos(lat);
  const y = GLOBE_RADIUS * sin(lat);
  const r1 = r - LINE_THICKNESS * sin(lat) / 2;
  const r2 = r + LINE_THICKNESS * sin(lat) / 2;
  const line = new CylinderGeometry(r1 + HAG, r2 + HAG, cos(lat) * LINE_THICKNESS, 50, 8, true);
  line.translate(0, -cos(lat) * LINE_THICKNESS / 2 + y, 0);
  const mesh = new Mesh(line, new MeshBasicMaterial({ color: GRID_COLOR }));
  this.globeMesh.add(mesh);
}
Perhaps there is a better way where the lines are part of some sort of texture applied to the globe, but this is at least giving satisfactory results for now.
When it comes to 3D animation, there are a lot of terms and concepts that I'm not familiar with (maybe a secondary question to append to this one: what are some good books to get familiar with the concepts?). I don't know what a "UV" is (in the context of 3D rendering) and I'm not familiar with what tools exist for mapping pixels on an image to points on a mesh.
I have the following image being produced by a 360-degree camera (it's actually the output of an HTML video element):
I want the center of this image to be the "top" of the sphere, and any radius of the circle in this image to be an arc along the sphere from top to bottom.
Here's my starting point (copying lines of code directly from the Three.JS documentation):
var video = document.getElementById( "texture-video" );
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map: texture } );
var geometry = new THREE.SphereGeometry(0.5, 100, 100);
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
camera.position.z = 1;

function animate() {
    mesh.rotation.y += 0.01;
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
}
animate();
This produces the following:
There are a few problems:
The texture is rotated 90 degrees
The ground is distorted, although this may be fixed if the rotation is fixed?
Update: Upon further investigation of the sphere being produced, it's not actually rotated 90 degrees. Instead, the top of the image is the top of the sphere and the bottom of the image is the bottom of the sphere. This causes the left and right edges of the image to become the distorted "sideways ground" I saw.
This is on the outside of the sphere. I want to project this to the inside of the sphere (and place the camera inside the sphere)
Currently if I place the camera inside the sphere I get solid black. I don't think it's a lighting issue because the Three.JS docs said that a MeshBasicMaterial didn't need lighting. I think the issue may be that the normals of all of the sphere faces point outward and I need to reverse them. I'm not sure how one would do this - but I'm pretty sure it's possible since I think this is how skyboxes work.
Doing some research I'm pretty sure I need to modify the "UV"s to fix this, I just don't know how or really what that even means...
Working Example
I forked @manthrax's CodeSandbox.io solution and updated it with my own:
https://codesandbox.io/s/4w1njkrv9
The Solution
So after spending a day researching UV mapping to understand what it meant and how it worked, I was able to sit down and scratch out some trig to map points on a sphere to points on my stereographic image. It basically came down to the following:
Use arccosine of the Y coordinate to determine the magnitude of a polar coordinate on the stereographic image
Use the arctangent of the X and Z coordinates to determine the angle of the polar coordinate on the stereographic image
Use x = Rcos(theta), y = Rsin(theta) to compute the rectangular coordinates on the stereographic image
If time permits I may draw a quick image in Illustrator or something to explain the math, but it's standard trigonometry.
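Those three steps can be sketched as a standalone function for the full-sphere case, ignoring the FOV clamp for the moment (sphereToStereoUV is a made-up name; the radius expression mirrors the acos formula used in the code further down):

```javascript
// Maps a unit vector (x, y, z) on the sphere to a (u, v) point on the
// stereographic image, with the image center at (0.5, 0.5).
function sphereToStereoUV(x, y, z) {
  // 1. Arccosine of the (shifted) Y coordinate gives the magnitude of the
  //    polar coordinate: 0 at one pole, 0.5 (the image edge) at the other.
  var radius = Math.acos(1 - ((y + 1) / 2)) / Math.PI;
  // 2. Arctangent of X and Z gives the angle of the polar coordinate.
  var angle = Math.atan2(x, z);
  // 3. Polar → rectangular: u = R·cos(theta), v = R·sin(theta).
  return {
    u: radius * Math.cos(angle) + 0.5,
    v: radius * Math.sin(angle) + 0.5
  };
}
```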
I went a step further after this, because the camera I was using only has a 240 degree vertical viewing angle - which caused the image to get slightly distorted (especially near the ground). By subtracting the vertical viewing angle from 360 and dividing by two, you get an angle from the vertical within which no mapping should occur. Because the sphere is oriented along the Y axis, this angle maps to a particular Y coordinate - above which there's data, and below which there isn't.
Calculate this "minimum Y value"
For all points on the sphere:
If the point is above the minimum Y value, scale it linearly so that the first such value is counted as "0" and the top of the sphere is still counted as "1" for mapping purposes
If the point is below the minimum Y value, return nothing
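The cutoff computation itself is a one-liner (cutoffY is a hypothetical name; in the code further down the same value is stored in the variable maxY):

```javascript
// Y coordinate on the unit sphere beyond which the camera captured no
// data, given the camera's vertical viewing angle in degrees.
function cutoffY(vFovDegrees) {
  // Degrees from the vertical with no image data.
  var deadAngle = (360 - vFovDegrees) / 2;
  return Math.cos(Math.PI * deadAngle / 180);
}

// A 240-degree camera leaves a 60-degree dead zone: cutoffY(240) is cos(60°).
```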
Weird Caveats
For some reason the code I wrote flipped the image upside down. I don't know if I messed up on my trigonometry or if I messed up on my understanding of UV maps. Whatever the case, this was trivially fixed by flipping the sphere 180 degrees after mapping
As well, I don't know how to "return nothing" in the UV map, so instead I mapped all points below the minimum Y value to the corner of the image (which was black)
With a 240-degree viewing angle, the space at the bottom of the sphere with no image data was sufficiently large (on my monitor) that I could see the black circle when looking straight ahead. I didn't like the visual appearance of this, so I plugged in 270 for the vertical FOV. This leads to minor distortion around the ground, but not as bad as when using 360.
The Code
Here's the code I wrote for updating the UV maps:
// Enter the vertical FOV for the camera here
var vFov = 270; // = 240;
var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
var geometry = new THREE.SphereGeometry(0.5, 200, 200);
function updateUVs()
{
    var maxY = Math.cos(Math.PI * (360 - vFov) / 180 / 2);
    var faceVertexUvs = geometry.faceVertexUvs[0];

    // The sphere consists of many FACES
    for ( var i = 0; i < faceVertexUvs.length; i++ )
    {
        // For each face...
        var uvs = faceVertexUvs[i];
        var face = geometry.faces[i];

        // A face is a triangle (three vertices)
        for ( var j = 0; j < 3; j++ )
        {
            // For each vertex...
            // x, y, and z refer to the point on the sphere in 3d space where this vertex resides
            var x = face.vertexNormals[j].x;
            var y = face.vertexNormals[j].y;
            var z = face.vertexNormals[j].z;

            // Because our stereograph goes from 0 to 1 but our vertical field of view cuts off our Y early
            var scaledY = (((y + 1) / (maxY + 1)) * 2) - 1;

            // uvs[j].x, uvs[j].y refer to a point on the 2d texture
            if (y < maxY)
            {
                var radius = Math.acos(1 - ((scaledY / 2) + 0.5)) / Math.PI;
                var angle = Math.atan2(x, z);

                uvs[j].x = (radius * Math.cos(angle)) + 0.5;
                uvs[j].y = (radius * Math.sin(angle)) + 0.5;
            } else {
                uvs[j].x = 0;
                uvs[j].y = 0;
            }
        }
    }

    // For whatever reason my UV mapping turned everything upside down.
    // Rather than fix my math, I just replaced "minY" with "maxY" and
    // rotated the sphere 180 degrees.
    geometry.rotateZ(Math.PI);
    geometry.uvsNeedUpdate = true;
}
updateUVs();
var mesh = new THREE.Mesh( geometry, material );
The Results
Now if you add this mesh to a scene everything looks perfect:
One Thing I Still Don't Understand
Right around the "hole" at the bottom of the sphere there's a multi-colored ring. It almost looks like a mirror of the sky. I don't know why this exists or how it got there. Could anyone shed light on this in the comments?
Here is as close as I could get it in about 10 minutes of fiddling with a polar unwrapping of the UVs.
You can modify the polarUnwrap function to try and get a better mapping....
https://codesandbox.io/s/8nx75lkn28
You can replace the TextureLoader().loadTexture() with
//assuming you have created a HTML video element with id="video"
var video = document.getElementById( 'video' );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
to get your video fed in there...
More info here:
https://threejs.org/docs/#api/textures/VideoTexture
Also this may be useful to you:
https://community.theta360.guide/t/displaying-thetas-dual-fisheye-video-with-three-js/1160
I think it would be quite difficult to modify the UVs so that the stereographically projected image fits. The UVs of a sphere are set up to fit textures with an equirectangular projection.
To transform the image from stereographic to equirectangular, you might want to use Panorama tools like PTGui or Hugin. Or you can use Photoshop (apply Filter > Distort > Polar Coordinates > polar to rectangular).
Equirectangular projection of the image (with Photoshop), resized to 2:1 aspect ratio (not necessary for texture)
If you want the texture to be inside the sphere (or normals flipped), you are able to set the material to THREE.BackSide.
var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
You may then have to flip the texture horizontally: How to flip a Three.js texture horizontally
I drew a grid-based system on canvas using PIXI.js.
I'm trying to animate the thing: first, each particle's position.y is -200, then I'm using Tween.js to make them fall.
I tween the position to the correct final position, which is particle._y.
As you will notice, after falling there are some empty spaces, and the CPU is overheating.
http://jsbin.com/wojosopibe/1/edit?html,js,output
function animateParticles() {
    for (var k = 0; k < STAGE.children.length; k++) {
        var square = STAGE.children[k];
        new Tween(square, 'position.y', square._y, Math.floor(Math.random() * 80), true);
    }
}
I think I'm doing something wrong.
Can someone please explain to me what I'm doing wrong and why there are some empty spaces after falling?
The reason for the empty spaces is that some of your animations are not starting. The cause is in this line:
new Tween(square, 'position.y', square._y, Math.floor(Math.random() * 80), true);
Looking at your function definition for Tween.js, I see this:
function Tween(object, property, value, frames, autostart)
The fourth parameter is frames. I'm assuming this is the number of frames required to complete the animation.
Well, your Math.floor call will sometimes return zero, meaning the animation will have no frames and won't start!
You can fix this by using Math.ceil() instead. This way there will always be at least 1 frame for the animation:
new Tween(square, 'position.y', square._y, Math.ceil(Math.random() * 80), true);
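To see the difference concretely, take a small value that Math.random() could plausibly return:

```javascript
// With a small random value, floor produces a zero-frame tween while
// ceil still yields at least one frame:
var r = 0.009;                        // a possible Math.random() result
var floorFrames = Math.floor(r * 80); // 0 → the tween never starts
var ceilFrames  = Math.ceil(r * 80);  // 1 → the tween runs for one frame
```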
Now, as for performance, I would suggest setting this up differently...
Animating all those graphics objects is very intensive. My suggestion would be to draw a single red square, and then use a RenderTexture to generate a bitmap from the square. Then you can add Sprites to the stage, which perform WAY better when animating.
// Create a single graphics object
var g = new PIXI.Graphics();
g.beginFill(0xFF0000).drawRect(0, 0, 2, 2).endFill();

// Render the graphics into a texture
var renderTexture = new PIXI.RenderTexture(RENDERER, RENDERER.width, RENDERER.height);
renderTexture.render(g);

for (var i = 0; i < CONFIG.rows; i++) {
    for (var j = 0; j < CONFIG.cols; j++) {
        var x = j * 4;
        var y = i * 4;

        // Add Sprites to the stage instead of Graphics
        var PARTICLE = new PIXI.Sprite(renderTexture);
        PARTICLE.x = x;
        PARTICLE.y = -200;
        PARTICLE._y = H - y;
        STAGE.addChild(PARTICLE);
    }
}
This link will have some more examples of a RenderTexture:
http://pixijs.github.io/examples/index.html?s=demos&f=render-texture-demo.js&title=RenderTexture
Hello, I have created a simple renderer for my 3D objects (PHP-generated).
I am successfully rendering all the objects, but I have some big issues with textures.
This is my texture: (512x512)
I'd like to use it on my object, but this is what happens:
I can't figure out how to display a non-stretched, nicely looking grid at a 1:1 ratio.
I think I need to calculate the repeats somehow. Any ideas?
This is how I'm setting up the texture:
var texture = THREE.ImageUtils.loadTexture(basePath + '/images/textures/dt.jpg', new THREE.UVMapping());
texture.wrapT = THREE.RepeatWrapping;
texture.wrapS = THREE.RepeatWrapping;
texture.repeat.set(1, 1);

stairmaterials[0] = new THREE.MeshBasicMaterial({
    side: THREE.DoubleSide,
    map: texture
});
I tried changing the repeat values to achieve a non-stretched 1:1 ratio, but I wasn't successful at all. It only got worse and worse.
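One way to reason about the repeat values, assuming you know the face's world-space extent (repeatsFor is a hypothetical helper, not a three.js API): for a 1:1 tiling, each axis needs extent divided by the desired tile size.

```javascript
// Hypothetical helper: number of texture repeats per axis so that one
// tile covers exactly tileSize world units (1:1, no stretching).
function repeatsFor(extentX, extentY, tileSize) {
  return { x: extentX / tileSize, y: extentY / tileSize };
}

// e.g. a 1024 × 512 unit face with 512-unit tiles would use repeat.set(2, 1)
```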
I also use the following algorithm to calculate the vertex UVs:
geom.computeBoundingBox();

var max = geom.boundingBox.max;
var min = geom.boundingBox.min;

// Note: offset and range are Vector2s, so their second component is .y
// (here holding the z extent) — reading .z off them would be undefined.
var offset = new THREE.Vector2(0 - min.x, 0 - min.z);
var range = new THREE.Vector2(max.x - min.x, max.z - min.z);

geom.faceVertexUvs[0] = [];
var faces = geom.faces;

for (var i = 0; i < geom.faces.length; i++) {
    var v1 = geom.vertices[faces[i].a];
    var v2 = geom.vertices[faces[i].b];
    var v3 = geom.vertices[faces[i].c];

    geom.faceVertexUvs[0].push([
        new THREE.Vector2((v1.x + offset.x) / range.x, (v1.z + offset.y) / range.y),
        new THREE.Vector2((v2.x + offset.x) / range.x, (v2.z + offset.y) / range.y),
        new THREE.Vector2((v3.x + offset.x) / range.x, (v3.z + offset.y) / range.y)
    ]);
}

geom.uvsNeedUpdate = true;
Is your mesh imported or generated?
You are iterating through each and every triangle and doing a sort of scaled planar projection. If all of these boxes are a single mesh, this will work to an extent: you will scale each and every box by the same ratio, and since they all exist in the same model space, they will be scaled properly and aligned.
If they are not the same mesh, it can fail.
The vertices you are looping through exist in model space. Each one of these boxes could exist at its own scale, at its own arbitrary local position. So you'd have to transform them to world space using the object's world matrix (.matrixWorld). When you apply this matrix you bring all of those vertices into the same space. Try doing this without the bounding box, or just grab the bounding box from one object, and you will see a uniform scale and proper alignment.
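As a plain-JS illustration of that model-to-world transform (applyMat4 is a made-up helper; in practice you would call Vector3.applyMatrix4 with the object's matrixWorld, and three.js stores matrix elements column-major):

```javascript
// Apply a column-major 4x4 matrix (laid out like THREE.Matrix4.elements)
// to a point, bringing it from model space into world space.
function applyMat4(m, p) {
  return [
    m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
    m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
    m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14]
  ];
}

// A world matrix that just translates by (5, 0, 0):
var world = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  5, 0, 0, 1];
```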
You won't be able to use this algorithm to get the map to show in 3D though, because you are doing a 2D projection. You can, however, check each triangle's orientation and then choose a different projection direction.
I am learning ways of manipulating the HTML5 Canvas, and decided to write a simple game, a scrolling arcade, for better comprehension. It is still at the very beginning of development, and while rendering the background (a moving star field), I encountered a small yet annoying issue: some of the stars blink while moving. Here's the code I used:
var c = document.getElementById('canv');
var width = c.width;
var height = c.height;
var ctx = c.getContext('2d'); // context

var bgObjx = new Array();
var bgObjy = new Array();
var bgspeed = new Array();

function init() {
    for (var i = 1; i < 50; i++) {
        bgObjx.push(Math.floor(Math.random() * width));  // x spans the canvas width
        bgObjy.push(Math.floor(Math.random() * height)); // y spans the canvas height
        bgspeed.push(Math.floor(Math.random() * 4) + 1);
    }
    setInterval(draw_bg, 50);
}
function draw_bg() {
    var distance; // distance to the star is conveyed by color
    ctx.fillStyle = "rgb(0,0,0)";
    ctx.fillRect(0, 0, width, height);

    for (var i = 0; i < bgObjx.length; i++) {
        distance = Math.random() * 240;
        if (distance < 100) distance = 100; // don't let it be too dark

        ctx.fillStyle = "rgb(" + distance + "," + distance + "," + distance + ")";
        ctx.fillRect(bgObjx[i], bgObjy[i], 1, 1);

        bgObjx[i] -= bgspeed[i];
        if (bgObjx[i] < 0) { // if the star has passed the edge of the screen, respawn it
            bgObjx[i] += width;
            bgObjy[i] = Math.floor(Math.random() * height);
            bgspeed[i] = Math.floor(Math.random() * 4) + 1;
        }
    }
}
As you can see, there are three arrays: one for the stars' (objects') x coordinates, one for the y coordinates, and one for the speed variable. The color of a star changes every frame, to make it flicker. I suspected that the color change was the issue, and bound each object's color to its speed:
for (var i = 0; i < bgObjx.length; i++) {
    distance = bgspeed[i] * 30;
Actually, that solved the issue, but I still don't get how. Would any graphics rendering guru bother to explain this, please?
Thank you in advance.
P.S. Just in case: yes, I've drawn some solutions from existing Canvas game, including the color bind to speed. I just want to figure out the reason behind it.
In this case, the 'Blinking' of the stars is caused by a logic error in determining the stars' distance (color) value.
distance = Math.random() * 240; // This is not guaranteed to return an integer
distance = (Math.random() * 240)>>0; // This rounds down the result to nearest integer
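The practical effect, sketched below: with a fractional component, the rgb() string was not a valid color in the syntax of the time, so the assignment to fillStyle was silently ignored and the star kept whichever color was last set (which is why it appeared to blink). starStyle is just an illustrative wrapper around the string concatenation in the question:

```javascript
// Builds the fill style string the same way the question's code does.
function starStyle(distance) {
  return "rgb(" + distance + "," + distance + "," + distance + ")";
}

// starStyle(123.45)       → "rgb(123.45,123.45,123.45)" (fractional components)
// starStyle(123.45 >> 0)  → "rgb(123,123,123)"          (integer components)
```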
Double buffering is usually unnecessary for canvas, as browsers will not display the drawn canvas until the drawing functions have all been completed.
I used to see a similar effect when programming Direct2D games, and found a double buffer would fix the flickering.
I'm not sure how you would accomplish a double (or triple?) buffer with the canvas tag, but that's the first thing I would look into.