I have several objects in a scene, and I want them to move around on a wide plane that receives shadows. If I use only one point light or a very large directional light, an object's shadow becomes very rough (jagged shadow edges) when it moves toward a corner, like this:
So I'm wondering: should I put a light above every object and have it follow that object, in order to get a high-quality shadow edge? In some cases I cannot set shadow.mapSize any higher.
There are a number of things that control shadow quality.
What it looks like to me is that your shadow camera's left/right/top/bottom are set too large, so it's casting the shadow over too large an area.
Shadow casting lights are pretty expensive, so I would avoid trying to compensate by having multiple lights.
Try reducing light.shadow.camera left/right/top/bottom to focus the map more tightly on where your objects are. Also try increasing the shadow map size to something like 1024 or 2048 square.
Another thing that helps is to use THREE.PCFShadowMap instead of the default. That will smooth the edges of your shadows.
edit: Also check out THREE.PCFSoftShadowMap. It's slower to render, but it greatly smooths shadow edges.
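For reference, here's a minimal sketch of those settings (the numbers are placeholders, and the property names assume a reasonably recent three.js; older versions use light.shadowCameraLeft, light.shadowMapWidth, etc.):

// Tighten the shadow camera frustum around the area your objects actually occupy
var light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;
light.shadow.camera.left = -50;
light.shadow.camera.right = 50;
light.shadow.camera.top = 50;
light.shadow.camera.bottom = -50;
light.shadow.camera.near = 1;
light.shadow.camera.far = 500;

// Bigger map = sharper shadow edges (at a memory/performance cost)
light.shadow.mapSize.width = 2048;
light.shadow.mapSize.height = 2048;

// Softer filtering on the renderer
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;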
Getting the shadow mapping right is a tricky process and requires a lot of fiddling.
Also, try attaching a DirectionalLightHelper to your light and a CameraHelper to the light's shadow camera. Those will help you figure out how big the shadow-casting area is and where things are pointing.
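Something like this (assuming the same light variable as above):

// Visualize the light's direction and its shadow camera's frustum
scene.add(new THREE.DirectionalLightHelper(light, 5));
scene.add(new THREE.CameraHelper(light.shadow.camera));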
https://imgur.com/a/Zo80z
A friend just recommended looking into WebGL instead of CSS transitions. I have a set of polygons that make up a 2D board game.
Basically, the app moves the player space by space, starting at the top of the "C", and we want to create a first-person view of moving to the next space in the sequence.
The points are plotted, and I was thinking in terms of normalizing each shape, rotating each into the proper direction, adding perspective with a translateZ transform, and finally transitioning them along the interior angles from space to space, while working out how to sequence those transitions between spaces.
Is there an easier way to move a WebGL camera through the spaces instead of pushing the polygons through transitions to simulate perspective? Perhaps a library that helps with this?
Thanks all!
WebGL doesn't have a camera. WebGL is a rasterization library. It draws pixels (or 4 value things) into arrays (the canvas, textures, renderbuffers). Cameras are something you implement yourself in JavaScript or you use a library like Three.js that has them implemented for you.
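With three.js, for example, the "move to the next space" step is just animating a camera's position and orientation between space centers. A rough sketch (the space coordinates and the tweening are made up for illustration):

// Assumes a standard three.js scene/renderer are already set up
var camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);

// Hypothetical list of board-space centers, in world coordinates
var spaces = [new THREE.Vector3(0, 1, 0), new THREE.Vector3(5, 1, 0), new THREE.Vector3(10, 1, 3)];
var current = 0;

function moveToNextSpace(t) {            // t goes 0 -> 1 over the transition
  var from = spaces[current];
  var to = spaces[current + 1];
  camera.position.lerpVectors(from, to, t);  // interpolate between the two spaces
  camera.lookAt(to);                         // face the space we're heading toward
}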
I'm trying to achieve a permanent head-coupled perspective without using the full headtrackr library. My head won't be moving, but it won't be directly in front of the screen.
I have a little demo that you can download and run with python -m SimpleHTTPServer 8000
The code is adapted mainly from this headtrackr example and part of the headtrackr source.
My expectations are based off this diagram:
In the third image, I imagine slightly swiveling my monitor counter-clockwise from above. This should be equivalent to reducing Z and making X less than zero. I expect my monitor to show the middle image, but instead I see something like this:
I think the "window" I'm looking through is the XY-plane, but shouldn't it stretch like the middle orange rectangle in the first diagram? Here's another window that stays fixed: http://kode80.com/2012/04/09/holotoy-perspective-in-webgl/ to see what I mean by "window."
Are off-axis perspective and head-tracking unrelated? How do I get a convincing illusion of off-axis perspective in THREE.js?
I think you can accomplish what you're aiming for with setViewOffset. Your diagram looks a little off to me. Perhaps it's because there is no cube in the off-axis projection, but I think the point is that the frustum should remain framed on a fixed point without rotating the camera, which would introduce perspective distortion.
To accomplish this with setViewOffset, I would set fullWidth and fullHeight to some extra-large size. The view offset will then be a window into that oversized view. As the user moves, that window will be offset in the opposite direction of the viewer.
http://threejs.org/docs/#Reference/Cameras/PerspectiveCamera
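A rough sketch of that approach, assuming an existing THREE.PerspectiveCamera named camera (the oversize factor and the head-position-to-offset mapping are arbitrary placeholders):

// Render through a window into an oversized view
var fullWidth = window.innerWidth * 3;    // arbitrary "extra large" full view
var fullHeight = window.innerHeight * 3;

function updateView(headX, headY) {       // head position, roughly -1..1
  // Offset the window in the opposite direction of the viewer's movement
  var x = (fullWidth - window.innerWidth) / 2 - headX * window.innerWidth;
  var y = (fullHeight - window.innerHeight) / 2 + headY * window.innerHeight;
  camera.setViewOffset(fullWidth, fullHeight, x, y, window.innerWidth, window.innerHeight);
}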
3D Projection mapping can be broken into a corner-pinning texture step and a perspective adjustment step. I think they're somewhat unrelated.
In Three.js you can make a quad surface (composed of two triangle Face3) and map a texture onto the surface. Then move the corners of the quad in XY (not Z). I don't think this step introduces perspective artifacts other than what's necessary for minor deformations in a quad texture. I think I'm talking about banding issues and nearest neighbor artifacts, not 3d perspective errors. The size of these artifacts depends on how the projector is shining on the object. If the projector is nicely perpendicular to the surface, very little corner-mapping is necessary. If you're using a monitor instead of a projector, then no corner-pinning is necessary.
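As a rough sketch of that corner-pinning step (this uses the current BufferGeometry-based PlaneGeometry rather than the older Geometry/Face3 API mentioned above; the texture and the corner offsets are placeholders):

// A textured quad whose corners you nudge in XY only (Z stays 0)
var geometry = new THREE.PlaneGeometry(2, 2);              // 4 vertices, 2 triangles
var material = new THREE.MeshBasicMaterial({ map: texture }); // 'texture' assumed to exist
var quad = new THREE.Mesh(geometry, material);

// Vertex order for PlaneGeometry: top-left, top-right, bottom-left, bottom-right
var pos = geometry.attributes.position;
pos.setXY(1, 1.1, 0.95);        // pin the top-right corner slightly
pos.needsUpdate = true;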
Next is the perspective adjustment step. You have to adjust the content of the texture based on where the user is in relation to the real-life surface. I believe you can do this with an XYZ distance from the physical viewer to the center of the screen surface, a scaling factor between pixels and real-life size, and the pixel dimensions of the surface.
In my demo, the blueish faces of my cubes point in the positive Z direction. When I rotate my monitor in the real-life world, they continue to point in the positive Z direction of their monitor world. The diagram that I posted is a little misleading because the orange box in the middle picture is locally rotated to compensate for the rotating of the real-life monitor world. That orange box's front face is no longer pointing exactly in the positive Z direction of its monitor world.
Outside of Three.js, in Processing there are some techniques for projection mapping.
This one may be the simplest, although I haven't tried it myself: http://blogs.bl0rg.net/netzstaub/2008/08/24/wiimote-headtracking-in-processing/
SurfaceMapper for Processing has support for real-life curved surfaces (not just flat rectangles), but it only works for Processing before Processing 2.0.
If anyone develops a SurfaceMapper library for Three.js that would be really cool! I'd love to design a virtual world, put cameras in the world, have each camera consider real-life viewer perspective, and then put those rendered textures on real-life displays.
You need to adjust the perspective matrix. It's built with -left +right -bottom +top. Changing these will produce the effect you are looking for.
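In three.js that amounts to writing an asymmetric frustum into the camera's projection matrix yourself. A minimal sketch (the numbers are placeholders; on older three.js versions the call is makeFrustum(left, right, bottom, top, near, far) instead):

// Off-axis (asymmetric) frustum: shift left/right/top/bottom instead of rotating the camera.
// Derive these values from the viewer's head position relative to the screen.
var near = 0.1, far = 1000;
var l = -0.3, r = 0.1, t = 0.25, b = -0.2;   // deliberately asymmetric

camera.projectionMatrix.makePerspective(l, r, t, b, near, far);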
I'm trying to draw a tiled background using Javascript on an HTML5 canvas, but it's not working because shapes that intersect the edges of the canvas don't wrap around to the other side. (Just to be clear: these are static shapes--no motion in time is involved.) How can I get objects interrupted by one side of the canvas to wrap around to the other side?
Basically I'm looking for the "wraparound" effect that many video games use--most famously Asteroids; I just want that effect for a static purpose here. This page seems to be an example that shows it is possible. Note how an asteroid, say, on the right edge of the screen (whether moving or not) continues over to the left edge. Or for that matter, an object in the corner is split between all four corners. Again, no motion is necessarily involved.
Anyone have any clues how I might be able to draw, say, a square or a line that wraps around the edges? Is there perhaps some sort of option for canvas or Javascript? My google searches using obvious keywords have come up empty.
Edit
To give a little more context, I'm basing my work off the example here: Canvas as Background Image. (Also linked from here: Use <canvas> as a CSS background.) Repeating the image is no problem. The problem is getting the truncated parts of shapes to wrap around to the other side.
I'm not sure how you have the tiles set up; however, if they are all part of a single 'wrapper' slide which has its own x,y at, say, 0,0, then you could actually just draw it twice, or generate a new slide as needed. Hopefully this code will better illustrate the concept.
// Here, the 'tilegroup' is the same size as the canvas
function renderbg() {
  // Draw the tile group in its current position
  tiles.draw(tiles.posx, tiles.posy);

  // If it has slid off one side, draw a second copy to fill the gap on the other side
  if (tiles.posx < 0)
    tiles.draw(canvas.width + tiles.posx, tiles.posy);
  if (tiles.posx > 0)
    tiles.draw(-canvas.width + tiles.posx, tiles.posy);
}
So basically, the idea here is to draw the group of tiles twice: once in its actual position, and again to fill in the gap. You still need to calculate when the entire group leaves the canvas completely and then reset it, but hopefully this points you in the right direction!
You could always create your tileable image in canvas, generate a toDataURL(), and then assign that data URL as a background to something and let CSS do the tiling... just a thought.
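A quick sketch of that idea:

// Draw one tile on a small offscreen canvas, then let CSS repeat it as a background
var tile = document.createElement('canvas');
tile.width = tile.height = 64;
var ctx = tile.getContext('2d');
// ... draw your tile pattern on ctx here ...
document.body.style.backgroundImage = 'url(' + tile.toDataURL() + ')';
document.body.style.backgroundRepeat = 'repeat';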
Edit: If you're having trouble drawing a tileable image, you could create a 3*width x 3*height canvas, draw on it as usual (treating the center square as the final result), and then see if you can't draw from subsets of the canvas to itself. It looks like you'd have to use:
var myImageData = context.getImageData(left, top, width, height);
context.putImageData(myImageData, dx, dy);
(with appropriate measurements)
https://developer.mozilla.org/En/HTML/Canvas/Pixel_manipulation_with_canvas/
Edit II: The idea was that you'd have a canvas big enough to hold a center area of interest, plus buffer areas around it large enough to account for any of the shapes you may draw, like so:
XXX
XCX
XXX
You could draw the shapes once to this big canvas and then just blindly draw each of the X areas around the center area onto the center area (and then clear those areas out for the next drawing). So, if K is the number of shapes, instead of 4*K draws you have K + 8 draws (and then 8 clears). Obviously the practical applicability of this depends on the number of shapes and on overlapping concerns, although I bet it could be tweaked. Depending on the complexity of your shapes, it may make sense to draw a shape 4 times as you originally thought, or to draw to some buffer area and then draw its pixel data 4 times. I'll admit this is an idea that just popped into my head, so I might be missing something.
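Here's a rough sketch of that fold-back step (the variable names, and the drawShapes routine, are hypothetical; 'view' is the visible canvas):

// 'big' is an offscreen canvas three times the size of the visible canvas
var w = view.width, h = view.height;
var big = document.createElement('canvas');
big.width = 3 * w;
big.height = 3 * h;
var bctx = big.getContext('2d');

// 1) Draw all shapes once, offset so the centre w x h region is the "real" board.
//    Shapes near an edge will spill into the surrounding border regions.
// drawShapes(bctx, w, h);   // hypothetical: your own drawing routine

// 2) Fold each of the 8 border regions back onto the centre.
for (var dx = -1; dx <= 1; dx++) {
  for (var dy = -1; dy <= 1; dy++) {
    if (dx === 0 && dy === 0) continue;
    bctx.drawImage(big,
      w + dx * w, h + dy * h, w, h,   // source: one border region
      w, h, w, h);                    // destination: the centre
  }
}

// 3) Copy the centre region to the visible canvas.
view.getContext('2d').drawImage(big, w, h, w, h, 0, 0, w, h);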
Edit III: And really, you could be smart about it. If you know how a set of objects is going to overlap, you should only have to draw from the buffer once. Say you have a bunch of shapes in a row that only spill into the north overlapping region. All you should need to do is draw those shapes and then draw the north overlapping region to the south side. The hairy regions would be the corners, but I don't think they really get hairy unless the shapes are large... At this point I should probably quiet down and see if there are any existing implementations of what I'm describing, because I'm not sure my off-the-cuff writing is helping anybody.
I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort polygons or objects. The graphics card is "dumb": you tell it to draw an object and it draws the pixels that represent it; at the same time, into another non-visible "image" called the z-buffer (or depth buffer), it writes, for each pixel, not a color but the distance to the camera. For any other objects you draw afterwards, the graphics card checks the distance to the camera for each pixel, and if it's farther away, it doesn't draw it (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects, but it doesn't play well with transparency. Say you have 2 transparent objects and you want A to be drawn behind B. You need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. To draw 2 intersecting transparent objects correctly, the graphics card would have to sort all the polygons, and since it doesn't do that, you'll have to do it yourself.
It's one of these things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true, we'll sort that object so it's drawn before (earlier than) other transparent objects that are in front of it. But we can't really help you if you want them to intersect.
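If they don't intersect, one knob you do have is to control the draw order yourself. A sketch (recent three.js; cubeBack and cubeFront are placeholder names for your meshes):

// Force a predictable back-to-front order for the transparent meshes
// (this only helps while the objects don't actually intersect)
cubeBack.material.transparent = true;
cubeFront.material.transparent = true;

// Stop transparent surfaces from writing depth, so later draws aren't occluded by them
cubeBack.material.depthWrite = false;
cubeFront.material.depthWrite = false;

// Lower renderOrder is drawn first
cubeBack.renderOrder = 0;
cubeFront.renderOrder = 1;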
Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that is drawn later. I have a camera that can only move on the Z axis.
I tried to use:
a cube-mapping shader - PROBLEM: artifacts with the shadow planes; the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artifacts with the shadow plane - it has edge highlighting, OR only the other sprites get drawn, without the objects
an HTML DOM background - PROBLEM: big and ugly aliasing on the models.
What can I try more? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over that first "buffer". Maybe use the buffer as the background (painting it in 2D with an orthographic projection, and disabling depth-buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
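In three.js terms that would look roughly like this (backgroundScene and backgroundCamera are assumed to be a separate scene and orthographic camera you set up for the background):

renderer.autoClear = false;   // clear manually so the first pass isn't wiped out

function render() {
  renderer.clear();
  renderer.render(backgroundScene, backgroundCamera);  // pass 1: the background
  renderer.clearDepth();                               // so the background never occludes the scene
  renderer.render(scene, camera);                      // pass 2: the 3D objects
  requestAnimationFrame(render);
}
render();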
If you want a "3d" background i.e. something that will follow the rotation of your camera, but not react to the movement (be infinitely far), then the only way to do it is with a cubemap.
The other solution is an environment dome - a fully 3D object.
If you want a static background, then you should be able to just use an HTML background; I'm not sure why that would fail or what 'aliasing in models' you are talking about.