I would like to modify the OutlinePass to outline all of the selected objects in the scene, including those contained within the screen-space bounds of another selected object (I hope I'm using that term correctly).
I am using the three.js OutlinePass to indicate objects currently selected in the scene. With ray picking I append to the selected-objects array, then update outlinePass.selectedObjects with that array.
The objects I select are PlaneBufferGeometry with MeshBasicMaterial. Each successively drawn plane gets a higher renderOrder and a slightly larger offset along the z axis (which in my case points upwards), so that they can be picked correctly.
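Roughly, the setup and selection flow look like this (a simplified sketch; the OutlinePass comes from the three.js examples, and variable names like `planes` are illustrative):

```js
// Simplified sketch of the setup described (scene, camera, renderer and
// an OutlinePass from the three.js examples are assumed to exist).
planes.forEach((plane, i) => {
  plane.renderOrder = i;          // each next plane drawn later
  plane.position.z = i * 0.001;   // slight upward offset for picking
});

const raycaster = new THREE.Raycaster();
const selectedObjects = [];

function onClick(event) {
  // Convert the mouse position to normalized device coordinates.
  const mouse = new THREE.Vector2(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(mouse, camera);
  const hits = raycaster.intersectObjects(planes);
  if (hits.length > 0) {
    selectedObjects.push(hits[0].object);
    outlinePass.selectedObjects = selectedObjects; // update the pass
  }
}
```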
If I select two disjoint planes, the outline works correctly.
If I select two intersecting planes, the outline works okay: it draws only around the combined silhouette of the two intersecting shapes. This effect is actually nice, but it would conflict with fixing the next point anyway.
If I select two planes where one is contained within the other (contained from the camera's point of view), then only the outer shape is outlined. Yes, that's probably a feature of OutlinePass, not a bug.
The current outline behaviour matches what the pass is designed to do (linked: an easy-to-understand list of the steps it performs).
I've spent at least 1-2 hours following the OutlinePass source, but I'm not familiar with vertex shaders or depth masks, and while I would like to learn about them in the future, I can't do that right now. I believe the OutlinePass could be modified to achieve what I need.
The OutlinePass currently overrides the scene material to prepare for the edge-detection pass. I'm hoping that behaviour could be modified (by changing one of the shaders it uses, or the depth parameters) so that different objects end up with distinguishable materials, and the edge-detection pass can then outline each object individually rather than only their "encompassing shapes", so to speak.
JS fiddle here; look for the UNCOMMENT line at the bottom of the JS to see the issue I described. Once you uncomment that line, I would like both planes to be outlined.
I'm aware there are other ways to show object outlines (like enlarging an object copy behind it), but I'm interested in using the OutlinePass :). Thank you!
I am working on a system to procedurally build meshes for "mines". Right now I don't want to achieve visual perfection; I am more focused on the basics.
I have got to the point where I am able to generate the shape of the mines and, from that, generate the two meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but the ground is proving really hard to map to UV coordinates properly, and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting while a triangle's area is greater than X.
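The splitting step, in essence, looks like this (a minimal sketch with illustrative names; triangles are arrays of three [x, y] points, and note that this uniform midpoint split can leave T-junctions where neighbours stop at different depths):

```js
// Area of a 2D triangle via the cross product.
function triangleArea(a, b, c) {
  return Math.abs(
    (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])
  ) / 2;
}

function midpoint(p, q) {
  return [(p[0] + q[0]) / 2, (p[1] + q[1]) / 2];
}

// Split every input triangle at least once, then keep splitting
// any triangle whose area exceeds maxArea.
function subTessellate(triangles, maxArea) {
  const out = [];
  const stack = triangles.map(t => ({ tri: t, force: true }));
  while (stack.length > 0) {
    const { tri: [a, b, c], force } = stack.pop();
    if (!force && triangleArea(a, b, c) <= maxArea) {
      out.push([a, b, c]);
      continue;
    }
    const ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    for (const t of [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]])
      stack.push({ tri: t, force: false });
  }
  return out;
}
```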
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (a displacement map as well; please ignore it for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
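In code, the naive mapping is essentially this (a sketch; it assumes a flat xyz vertex array and the grid's bounding box, with names of my own choosing):

```js
// Naive planar UV mapping: normalize each vertex's x/y into [0, 1]
// using the grid's bounding box; one UV pair per vertex.
function computeUVs(vertices, minX, minY, maxX, maxY) {
  const uvs = new Float32Array((vertices.length / 3) * 2);
  for (let i = 0, j = 0; i < vertices.length; i += 3, j += 2) {
    uvs[j] = (vertices[i] - minX) / (maxX - minX);         // u from x
    uvs[j + 1] = (vertices[i + 1] - minY) / (maxY - minY); // v from y
  }
  return uvs;
}
// e.g. geometry.setAttribute('uv', new THREE.BufferAttribute(uvs, 2));
```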
I think that, in theory, this should be right. I suspected the order of the vertices was creating the problem, but if that were the case the texture should appear rotated or merely odd, not odd AND stretched like that.
Once I get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to procedurally build the mesh, send it to multiple clients, and achieve the same visual rendering on each. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on client-side shaders: otherwise, placing tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the rendering on the client side; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced, without needing a whole client/server infrastructure.
Any suggestion?
Thanks!
I sorted out the issue in my code; it was a pretty silly mistake: I was adding 3 UV mappings per triangle instead of 1 per vertex, causing a huge visual mess. Once I sorted that out, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
I need to draw thousands of points and lines which have position, size and color attributes, and their positions are dynamic (interactive dragging).
I have been using BufferGeometry until now, but I have found two more concepts:
instancing
interleaved buffer
I want to know what these are and how they work. What are their advantages and disadvantages? Are they better for my case, or is plain BufferGeometry best for me?
Can you give me a full comparison of these three?
Interleaving means that instead of creating multiple VBOs to contain your data, you create one and mix your data: instead of having one buffer with v1,v1,v1,v2,v2,v2... and another with c1,c1,c1,c2,c2,c2..., you have one with v1,v1,v1,c1,c1,c1,v2,v2,v2,c2,c2,c2... and set up the attribute pointers with different offsets.
I'm not sure what the upside of this is, and I'm hoping someone with more experience can answer better. I'm also not sure what happens if you want to mix types, say lower precision for texture coordinates, or whether that would even be good practice.
On the downside, if you have to loop over the data and update, say, positions but not colors, that loop may be slightly more complicated than if they were just lined up separately.
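For what it's worth, here is what an interleaved layout looks like in three.js (a sketch; in older versions setAttribute is addAttribute):

```js
// Position (3 floats) and color (3 floats) share one buffer, stride 6.
const count = 1000;
const data = new Float32Array(count * 6); // x,y,z,r,g,b per point
// ... fill `data` ...

const interleaved = new THREE.InterleavedBuffer(data, 6);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position',
  new THREE.InterleavedBufferAttribute(interleaved, 3, 0));
geometry.setAttribute('color',
  new THREE.InterleavedBufferAttribute(interleaved, 3, 3));

// For dynamic dragging, update `data` and flag the buffer:
// interleaved.needsUpdate = true;
```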
Instancing is when you use one attribute across many geometry instances.
One example would be, say, a cube: v1,v1,v1,v2,v2,v2...v24,v24,v24, 24 vertices describing a cube with sharp edges in one attribute. You can have another attribute with 24 normals, and another with indices. If you wanted to position this somewhere, you would use a uniform and apply some operation with it to the position attribute.
If you wanted to make 16683 cubes, each with an individual position, you could issue a draw call for each with the same cube (attributes) bound, but with the position uniform changed each time.
Alternatively, you can make another, instanced attribute, pos1,pos1,pos1...pos16683,pos16683,pos16683, with 16683 positions for that many instances of the cube. When you issue an instanced draw call with these attributes bound, you can draw all 16683 instances of the cube within that one call. Instead of a position uniform, you read the extra attribute.
In the case of your points this does not make sense, since they map 1:1 to the attribute: you assign the position of each point inside that attribute, and there is no further need to transform it with some kind of uniform. With instancing, however, you could turn each point into something more complex, say a cube.
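A sketch of that in three.js (illustrative; BoxBufferGeometry is named BoxGeometry in newer versions, and the offset attribute needs a ShaderMaterial that declares it):

```js
// One shared cube geometry plus a per-instance offset attribute.
const instanceCount = 16683;
const base = new THREE.BoxBufferGeometry(1, 1, 1);

const geometry = new THREE.InstancedBufferGeometry();
geometry.index = base.index;
geometry.attributes.position = base.attributes.position;
geometry.attributes.normal = base.attributes.normal;

const offsets = new Float32Array(instanceCount * 3); // x,y,z per instance
// ... fill one position per instance ...
geometry.setAttribute('offset',
  new THREE.InstancedBufferAttribute(offsets, 3));

// In the vertex shader, instead of a position uniform:
//   attribute vec3 offset;
//   gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
```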
So I have a WebGL model viewer (one model made of lots of different objects), in which I have implemented a lasso/marquee selection box. That is, the user can click to drag a rectangle on the screen, and everything inside the rectangle gets selected. It selects not just visible objects, but everything in the model that is within the bounds of the rectangle.
I have to do this for 2 cases: partial and full selection.
Full Selection: only objects that are completely inside the rectangle will get selected.
Partial Selection: any object that has any overlap in the lasso rectangle, will get selected.
Now, I have full selection working just fine. The problem is with partial selection. Everything I've done is too slow. I don't have a spatial index/partition (octree or BVH, etc), so my next test is to create one to help more quickly cull out things that won't get selected.
In general, though, it seems at some point, I'm stuck with iterating through all the triangles of an object to see if any part of it overlaps with the lasso rectangle on screen. In some cases, this will be very slow if the lasso selection is dragged to be large enough, such that no objects get culled out, and all must be checked for intersection with the rectangle.
Right now, I'm doing all rectangle/triangle intersection tests in JS (various ways of checking whether an object overlaps the rectangle), and no test happens in a shader.
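For reference, the per-triangle test amounts to something like this (a sketch; points are {x, y} in screen space, the rect is axis-aligned, and it ignores degenerate collinear cases):

```js
function pointInRect(p, r) {
  return p.x >= r.minX && p.x <= r.maxX && p.y >= r.minY && p.y <= r.maxY;
}

// Strict segment intersection via orientation tests.
function segmentsIntersect(a, b, c, d) {
  const cross = (o, p, q) => (p.x - o.x) * (q.y - o.y) - (p.y - o.y) * (q.x - o.x);
  const d1 = cross(c, d, a), d2 = cross(c, d, b);
  const d3 = cross(a, b, c), d4 = cross(a, b, d);
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

function pointInTriangle(p, t) {
  const sign = (a, b) => (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
  const s1 = sign(t[0], t[1]), s2 = sign(t[1], t[2]), s3 = sign(t[2], t[0]);
  return (s1 >= 0 && s2 >= 0 && s3 >= 0) || (s1 <= 0 && s2 <= 0 && s3 <= 0);
}

function triangleOverlapsRect(tri, r) {
  if (tri.some(p => pointInRect(p, r))) return true;           // vertex in rect
  const corners = [
    { x: r.minX, y: r.minY }, { x: r.maxX, y: r.minY },
    { x: r.maxX, y: r.maxY }, { x: r.minX, y: r.maxY },
  ];
  if (corners.some(c => pointInTriangle(c, tri))) return true; // corner in tri
  const edges = [[0, 1], [1, 2], [2, 3], [3, 0]];
  for (let i = 0; i < 3; i++) {
    const a = tri[i], b = tri[(i + 1) % 3];
    for (const [u, v] of edges)
      if (segmentsIntersect(a, b, corners[u], corners[v])) return true;
  }
  return false;
}
```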
So I'm not really looking for help to speed up what I currently have; I want to know if there are known ways of doing this that are relatively fast even for large numbers of highly detailed objects (meshes). I seem to have hit a wall, creatively, in thinking of a shader-based approach.
So I have an animation that I'm coding in javascript and HTML5 (no libraries, no plugins, no nothing and I'd like it to stay that way). The animation uses physics (basically a bunch of unusual springs attached to masses) to simulate a simple liquid. The output from this part of the program is a grid (2d-array) of objects, each with a z value. This works quite nicely. My problem arises when drawing the data to an HTML5 Canvas.
That's what it looks like. Trust me, it's better when animated.
For each data point, the program draws one circle with a color determined by the z value. Just drawing these points, however, the grid pattern is painfully obvious and it is difficult to see the fluid that it represents. To solve this, I made the circles larger and more transparent so that they overlapped each other and the colors blended, creating a simple convolution blur. The result was both fast and beautiful, but for one small flaw:
As the circles are drawn in order, their color values don't stack equally, and so later-drawn circles obscure the earlier-drawn ones. Mathematically, the renderer is taking repeated weighted averages of the color-values of the circles. This works fine for two circles, giving each a value of 0.5*alpha_n, but for three circles, the renderer takes the average of the newest circle with the average of the other two, giving the newest circle a value of 0.5*alpha_n, but the earlier circles each a value of 0.25*alpha_n. As more circles overlap, the process continues, creating a bias toward newer circles and against older ones. What I want, instead, is for each of three or more circles to get a value of 0.33*alpha_n, so that earlier circles are not obscured.
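To make the bias concrete, here is the repeated weighted average as I understand it (a toy calculation, not part of the renderer; multiply each weight by alpha_n for the real values):

```js
// Each newly drawn circle is averaged 50/50 with everything drawn so far,
// so earlier circles keep losing weight.
function contributions(n) {
  let weights = [];
  for (let i = 0; i < n; i++) {
    weights = weights.map(w => w * 0.5); // halve everything drawn so far
    weights.push(i === 0 ? 1 : 0.5);     // first circle starts at full weight
  }
  return weights;
}
console.log(contributions(2)); // [0.5, 0.5]
console.log(contributions(3)); // [0.25, 0.25, 0.5], newest dominates
// The desired result for three circles would be [1/3, 1/3, 1/3].
```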
Here's an image of alpha-blending in action. Notice that the later blue circle obscures earlier drawn red and green ones:
Here's what the problem looks like in action. Notice the different appearance of the left side of the lump.
To solve this problem, I've tried various methods:
Using different canvas "blend-modes". "Multiply" (as seen in the above image) did the trick, but created unfortunate color distortions.
Lumping together drawing calls. Instead of making each circle a separate canvas path, I tried lumping them all together into one. Unfortunately, this is incompatible with having separate fill colors and, what's more, the path did not blend with itself at all, creating a matte, monotone silhouette.
Interlacing drawing-order. Instead of drawing the circles in 0 to n order, I tried drawing first the evens and then the odds. This only partially solved the problem, and created an unsightly layering pattern in which the odds appeared to float above the evens.
Building my own blend mode using putImageData. I tried creating a manual pixel-shader to suit my needs using javascript, but, as expected, it was far too slow.
At this point, I'm a bit stuck. I'm looking for creative ways of solving or circumventing this problem, and I welcome your ideas. I'm not very interested in being told that it's impossible, because I can figure that out for myself. How would you elegantly draw a fluid from such data points?
If you can decompose your circles into two groups (evens and odds), such that there is no overlap among circles within a group, the following sequence should give the desired effect:
Clear the background
Draw the evens with an alpha of 1.0 (opaque)
Draw the odds with an alpha of 1.0 (opaque)
Draw the evens with an alpha of 0.5
Places which are covered by neither evens nor odds will show the background. Those which are covered only by evens will show the evens at 100% opacity. Those covered by odds will show the odds with 100% opacity. Those covered by both will show a 50% blend.
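A minimal canvas sketch of that sequence (drawCircles is an assumed helper that fills one group of circles at the current globalAlpha):

```js
function render(ctx, evens, odds) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height); // 1. clear
  ctx.globalAlpha = 1.0;
  drawCircles(ctx, evens); // 2. evens, opaque
  drawCircles(ctx, odds);  // 3. odds, opaque
  ctx.globalAlpha = 0.5;
  drawCircles(ctx, evens); // 4. evens again at 50%
  ctx.globalAlpha = 1.0;   // restore
}
```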
There are other approaches one can use to try to blend three or more sets of objects, but doing it "precisely" is complicated. An alternative, if one has three or more images that should be blended uniformly according to their alpha channel, is to repeatedly draw all of the images while the global alpha decays from 1 to 0 (note that the aforementioned procedure effectively does that; it is numerically precise only when there are two images). Numerical rounding limits the precision of this technique, but even two or three passes may substantially reduce the severity of ordering-caused visual artifacts, using fewer steps than precise blending would require.
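A rough sketch of that multi-pass idea (drawCircles again assumed; the decay schedule is illustrative, not the numerically precise one):

```js
// Draw every group in each pass while globalAlpha decays, so the
// draw order matters less with each additional pass.
function renderDecay(ctx, groups, passes) {
  for (let p = 0; p < passes; p++) {
    ctx.globalAlpha = 1 - p / passes; // 1, then progressively lower
    for (const group of groups) drawCircles(ctx, group);
  }
  ctx.globalAlpha = 1.0; // restore
}
```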
Incidentally, if the pattern of blending is fixed, it may be possible to speed up rendering enormously by drawing the evens and odds on separate canvases not as circles, but as opaque rectangles, and subtracting from the alpha channel of one of the canvases the contents of a fixed "cookie-cutter" canvas or fill pattern. If one properly computes the contents of the cookie-cutter canvases, this approach can be used for more than two sets of canvases. Determining the contents of the cookie-cutter canvases may be somewhat slow, but it only needs to be done once.
Well, thanks for all the help, guys. :) But, I understand, it was a weird question and hard to answer.
I'm putting this here in part so that it will provide a resource to future viewers. I'm still quite interested in other possible solutions, so I hope others will post answers if they have any ideas.
Anyway, I figured out a solution on my own: before drawing the circles, I comb-sorted them by z-value and drew them in reverse, so the highest-valued objects (which should be closer to the viewer) were drawn last and were not obscured by other circles. The obscuring effect is still there, but it now happens in a way that makes sense with the geometry. Here is what the simulation looks like with this correction; notice that it is now symmetrical:
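In code, the fix boils down to something like this (I used a comb sort; Array.prototype.sort and a drawCircle helper are shown here for brevity):

```js
// Sort ascending by z and draw in that order, so the highest-z
// (closest) circles are drawn last and are never obscured.
circles.sort((a, b) => a.z - b.z);
for (const c of circles) drawCircle(ctx, c); // farthest first, nearest last
```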
Apple's official documentation says:
WebKitCSSMatrix objects represent a 4x4 homogeneous matrix for 3D transforms or a vector for 2D transforms. You can use these objects to manipulate matrices in JavaScript. For example, you can multiply, translate, and scale matrices.
I'm a glorified designer, not an engineer, so I'm assuming that's the reason I can't make any sense of that description. Please, can somebody point me in the right direction to understand how this matrix and/or these vectors work?
Whew, this is the most difficult question I've attempted to answer. The short answer is that, as web designers, we don't have the vocabulary to express 3d transformations. In order to explain it to you in a comprehensible way I'd have to use math concepts which I don't understand myself.
If you'd like to investigate further you can take a look at:
http://www.eleqtriq.com/2010/05/css-3d-matrix-transformations/
But, I can explain it visually.
http://duopixel.com/stack/webkitmatrix/ (you'll have to view this under Safari 5 w/Snow Leopard, or an iPad, of course).
What you're seeing is just an interface to the 16 values of a WebKitCSSMatrix. The sliders that seem to do nothing are related to the z axis, which I suspect would be visible if we had more objects in the 3D canvas.
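You can also poke at those values from script; a minimal sketch (the `#box` element is assumed, and note each method returns a new matrix rather than mutating in place):

```js
const el = document.querySelector('#box'); // assumed element
const m = new WebKitCSSMatrix();           // identity matrix

const moved = m.translate(100, 50, 0) // 100px right, 50px down
               .scale(2, 2, 1)        // double the size
               .rotate(0, 0, 45);     // 45deg around the z axis

el.style.webkitTransform = moved.toString(); // serializes to "matrix3d(...)"
```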
Edit: after studying the link I placed before, I noticed the original author has done the same example before, doh! http://www.eleqtriq.com/wp-content/static/demos/2010/css3d/matrix3dexplorer.html
Even though it's for ActionScript, check out Understanding the Transformation Matrix in Flash 8. It's got pretty pictures, too :)
Before getting into how transformation matrices (matrices is plural of matrix) work, it is important to understand what a matrix is. A matrix is a rectangular array (or table) of numbers consisting of any number of rows and columns. A matrix consisting of m rows and n columns is known as an m x n matrix. This represents the matrix's dimensions. You'll commonly see matrices with numbers in rows and columns surrounded by two large bracket symbols.
...
Affine transformations are transformations that preserve collinearity and relative distancing in a transformed coordinate space. This means points on a line will remain in a line after an affine transformation is applied to the coordinate space in which that line exists. It also means parallel lines remain parallel and that relative spacing or distancing, though it may scale, will always maintain a consistent ratio. Affine transformations allow for repositioning, scaling, skewing and rotation. Things they cannot do include tapering or distorting with perspective. If you've ever worked with transforming symbols in Flash, you probably recognize these qualities.
(source: senocular.com)
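To make that concrete, applying a 2D affine matrix (the six values a-f used by Flash, Canvas and CSS `matrix(...)`) to a point is just (a sketch):

```js
// x' = a*x + c*y + e;  y' = b*x + d*y + f
function applyAffine([a, b, c, d, e, f], { x, y }) {
  return { x: a * x + c * y + e, y: b * x + d * y + f };
}
// A pure 2x scale: applyAffine([2, 0, 0, 2, 0, 0], { x: 1, y: 1 }) -> { x: 2, y: 2 }
```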