WebGL Marquee/Lasso Selection Tool - javascript

So I have a WebGL model viewer (one model made of lots of different objects), in which I have implemented a lasso/marquee selection box. That is, the user can click to drag a rectangle on the screen, and everything inside the rectangle gets selected. It selects not just visible objects, but everything in the model that is within the bounds of the rectangle.
I have to do this for two cases: partial and full selection.
Full Selection: only objects that are completely inside the rectangle get selected.
Partial Selection: any object that overlaps the lasso rectangle at all gets selected.
Now, I have full selection working just fine. The problem is with partial selection. Everything I've done is too slow. I don't have a spatial index/partition (octree or BVH, etc), so my next test is to create one to help more quickly cull out things that won't get selected.
In general, though, it seems that at some point I'm stuck iterating through all the triangles of an object to see whether any part of it overlaps the lasso rectangle on screen. In some cases this will be very slow: if the lasso rectangle is dragged large enough, no objects get culled, and all of them must be checked for intersection with the rectangle.
Right now, I'm doing all rectangle/triangle intersection tests in JS (various ways of checking for intersection of an object with the rectangle), and no test is happening in a shader.
So I'm not really looking for help speeding up what I currently have; I want to know whether there are known ways of doing this that stay relatively fast even for large numbers of highly detailed objects (meshes). I seem to have hit a creative wall in thinking of a shader-based approach.
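For the screen-space triangle test itself, a standard 2D formulation is: a projected triangle overlaps the axis-aligned lasso rectangle iff a triangle vertex lies in the rectangle, a rectangle corner lies in the triangle, or an edge of one crosses an edge of the other. A dependency-free sketch (all names are illustrative, not from the app; collinear "touching" cases are ignored):

```javascript
// Rectangle is {minX, minY, maxX, maxY}; points are {x, y} in screen space.
function pointInRect(p, r) {
  return p.x >= r.minX && p.x <= r.maxX && p.y >= r.minY && p.y <= r.maxY;
}

// 2D cross product of (b - a) and (c - a); sign gives the turn direction.
function cross(a, b, c) {
  return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Point is inside the triangle if it is on the same side of all three edges.
function pointInTriangle(p, a, b, c) {
  const d1 = cross(a, b, p), d2 = cross(b, c, p), d3 = cross(c, a, p);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos);
}

// Proper segment intersection via orientation tests (ignores collinear touch).
function segmentsIntersect(p1, p2, p3, p4) {
  const d1 = cross(p3, p4, p1), d2 = cross(p3, p4, p2);
  const d3 = cross(p1, p2, p3), d4 = cross(p1, p2, p4);
  return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

// Triangle overlaps the rect iff: a triangle vertex is inside the rect, a
// rect corner is inside the triangle, or some pair of edges crosses.
function triangleIntersectsRect(a, b, c, r) {
  if (pointInRect(a, r) || pointInRect(b, r) || pointInRect(c, r)) return true;
  const corners = [
    { x: r.minX, y: r.minY }, { x: r.maxX, y: r.minY },
    { x: r.maxX, y: r.maxY }, { x: r.minX, y: r.maxY },
  ];
  if (corners.some((p) => pointInTriangle(p, a, b, c))) return true;
  const triEdges = [[a, b], [b, c], [c, a]];
  const rectEdges = [
    [corners[0], corners[1]], [corners[1], corners[2]],
    [corners[2], corners[3]], [corners[3], corners[0]],
  ];
  return triEdges.some(([s, e]) =>
    rectEdges.some(([s2, e2]) => segmentsIntersect(s, e, s2, e2)));
}
```

The vertex-in-rect check alone handles most hits cheaply, so it pays to run the three tests in this order and early-out.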

Related

three.js How to hack OutlinePass to outline all selected objects?

I would like to modify the OutlinePass to outline all of the selected objects in the scene, including those contained within the screen-space bounds of another (I hope I just used that term correctly).
I am using three.js OutlinePass to indicate objects currently selected in the scene. With ray picking I append to the selected objects array, then update outlinePass.selectedObjects with said array.
The objects I select are PlaneBufferGeometry with MeshBasicMaterial. Each subsequently drawn plane has an increasing renderOrder as well as a slightly larger offset along the z axis (which in my case points upwards), so that they can be picked correctly.
If I select two disjoint planes, the outline works correctly.
If I select two intersecting planes, the outline works okay: it only draws around the two intersecting shapes. This effect is actually nice, but it would collide with fixing the next point anyway.
If I select two planes, where one is contained within the other (contained in terms of view, looking from where the camera is), then only the outer shape is outlined. Yes, that's probably a feature of OutlinePass, not a bug.
The current outline behaviour matches what the pass is designed to do (there is a linked, easy-to-understand list of the steps it performs).
I've spent at least 1-2 hours following the OutlinePass source, but I'm not familiar with the subject of vertex shaders or depth masks and while I would like to learn about them in the future, I can't do that right now. I believe the OutlinePass could be modified to achieve what I need.
The OutlinePass currently overrides the scene material to prepare for the edge-detection pass. I'm hoping that behaviour could be modified (by changing one of the shaders it uses, or its depth parameters) so that different objects get different materials and can therefore be distinguished by the edge-detection pass, and not only their "encompassing shapes", so to speak.
JS fiddle here, look for the UNCOMMENT line at the bottom of JS to see the issue I described. Once you uncomment that line, I would like both of these planes to be outlined.
I'm aware there are other ways to show object outlines (like enlarging an object copy behind it), but I'm interested in using the OutlinePass :). Thank you!
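Not an OutlinePass patch, but the core idea in the question (distinguish objects in the pass that feeds edge detection) can be illustrated in plain JavaScript: if each selected object wrote a distinct ID instead of one flat color, an edge detector that fires wherever neighbouring IDs differ would outline a plane nested inside another. A sketch on a plain 2D ID array (all names are illustrative; 0 means "not selected"):

```javascript
// Mark a pixel as an edge when its ID differs from its right or down
// neighbour. With per-object IDs, the boundary between a nested plane and
// the plane behind it survives; with one flat "selected" value it vanishes.
function detectEdges(ids, width, height) {
  const edges = new Uint8Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const id = ids[y * width + x];
      const right = x + 1 < width ? ids[y * width + x + 1] : id;
      const down = y + 1 < height ? ids[(y + 1) * width + x] : id;
      if (id !== right || id !== down) edges[y * width + x] = 1;
    }
  }
  return edges;
}
```

On a 5x5 grid where object 1 fills a square and object 2 sits nested inside it, the 1/2 boundary produces edges; flattening both to the same ID makes the inner outline disappear, which is exactly the behaviour described above.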

The FabricJS selection box is shifted when the canvas is transformed

I am using FabricJS and I am trying to have two canvases next to each other which behave like one. So I can drag objects between them.
My idea was to shift the viewport of the second canvas by the size of the first, and then add the same objects to both canvases.
Here you can see what I did so far:
https://jsfiddle.net/ytdzo38h/
The selection box for the object is now shifted and therefore the object can't be grabbed in both canvases.
I found out that canvas.wrapperEl.getBoundingClientRect() returns the coordinates of the selection box, but I have no idea how I could adjust that to fit my needs. I would really appreciate your help :)
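For what it's worth, Fabric stores the viewport as a 2x3 affine matrix (canvas.viewportTransform = [a, b, c, d, e, f]), and a shifted viewport means pointer (screen) coordinates must be mapped back into scene coordinates through the inverse matrix before hit-testing. A dependency-free sketch of that mapping, assuming the standard [a, b, c, d, e, f] layout:

```javascript
// Apply a 2x3 affine matrix [a, b, c, d, e, f] to a point:
//   screenX = a*x + c*y + e,  screenY = b*x + d*y + f.
function transformPoint(p, t) {
  return { x: t[0] * p.x + t[2] * p.y + t[4], y: t[1] * p.x + t[3] * p.y + t[5] };
}

// Invert the matrix (assumes it is invertible, i.e. a*d - b*c !== 0).
function invertTransform(t) {
  const det = t[0] * t[3] - t[1] * t[2];
  const a = t[3] / det, b = -t[1] / det, c = -t[2] / det, d = t[0] / det;
  return [a, b, c, d, -(a * t[4] + c * t[5]), -(b * t[4] + d * t[5])];
}
```

For example, if the second canvas's viewport is translated by [1, 0, 0, 1, -500, 0], a pointer at screen (100, 50) on that canvas corresponds to scene point (600, 50), which is where the selection box needs to be hit-tested.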

Three.js TextGeometry Wordwrap - drag as group

I am working on a panorama site and want to be able to add a text label that can be dragged to the right location. It is a spherical pano for what it is worth :) The next 4 paragraphs are good info in general, especially if you are stuck like I was, but the real question starts below the +++
I originally tried using the regular canvas trick, but the canvas was too large vertically. For instance if I wanted to say "Hello World" that might be 120px wide and 15px tall, but the "transparent" canvas would be 150px tall, which would overlap other text objects, but not the panorama.
The other issue was if I wanted it to be more than a single line, so for instance if I wanted to say "Did you see this cool stream?" that could be broken up into 2-3 lines. After playing with it I got wordwrapping to work on both the canvas and the sprites, but ultimately the canvas problem killed it for me.
After that I tried Sprites, which were better about not visually overlapping the other text objects, but the canvas size was still too large. This created issues with the onMouseOver intersection tests: if labels were too close together you were likely to start dragging the wrong one around, and if you just wanted to pan / tilt the camera around to look at the panorama you might inadvertently click on the oversized canvas.
I tried multiple times to get TextGeometry working, but always got the error "THREE.TextGeometry: font parameter is not an instance of THREE.Font." until I finally found out that the API changed in the latest version, which made all the examples on the web outdated. After I found out about the API change, I was able to get it working; here is a fiddle for anyone that needs it: https://jsfiddle.net/287rumst/1/
++++++++++
In my app, the previous example using TextGeometry works out just fine, but only when it is a single line. When I loop through and add the next line, each line becomes a separate object that you drag around individually, which becomes a problem when I want to click on the label to edit its text, size, color and so on.
I am not sure what to do about dragging all the lines as a group. I expected it to work when I added the group to the objects array, which is how I did it before with the lines, but it doesn't. So I was thinking it would be great if the text could be wrapped in a transparent box, with the faces pressed against its sides, and the transparent box then used as the handle for dragging.
Here is the example of it working with wordwrap: https://jsfiddle.net/ajhalls/73dhu192/
Outstanding issues are dragging the group, and making it face the camera from the center of the group rather than from a corner while being dragged. I would have thought group.geometry.center() would work, as it does for a single line, but again it has issues.
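As a sketch of the wrapping step only: a greedy wrap by character count, assuming every character is equally wide (a real app would measure rendered glyph widths instead); each resulting line would then become one TextGeometry mesh added to a shared group for dragging:

```javascript
// Greedy word-wrap: pack words onto a line until adding the next word would
// exceed maxChars, then start a new line. A word longer than maxChars gets a
// line to itself rather than being split.
function wrapText(text, maxChars) {
  const words = text.split(/\s+/);
  const lines = [];
  let line = "";
  for (const word of words) {
    const candidate = line ? line + " " + word : word;
    if (candidate.length <= maxChars || !line) {
      line = candidate;  // word fits (or is alone on an overlong line)
    } else {
      lines.push(line);  // close the current line, start a new one
      line = word;
    }
  }
  if (line) lines.push(line);
  return lines;
}
```

For example, wrapText("Did you see this cool stream?", 12) splits the question's sample sentence into three lines.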

HTML5 Canvas Creative Alpha-Blending

So I have an animation that I'm coding in javascript and HTML5 (no libraries, no plugins, no nothing and I'd like it to stay that way). The animation uses physics (basically a bunch of unusual springs attached to masses) to simulate a simple liquid. The output from this part of the program is a grid (2d-array) of objects, each with a z value. This works quite nicely. My problem arises when drawing the data to an HTML5 Canvas.
That's what it looks like. Trust me, it's better when animated.
For each data point, the program draws one circle with a color determined by the z value. Just drawing these points, however, the grid pattern is painfully obvious and it is difficult to see the fluid that it represents. To solve this, I made the circles larger and more transparent so that they overlapped each other and the colors blended, creating a simple convolution blur. The result was both fast and beautiful, but for one small flaw:
As the circles are drawn in order, their color values don't stack equally, and so later-drawn circles obscure the earlier-drawn ones. Mathematically, the renderer is taking repeated weighted averages of the color-values of the circles. This works fine for two circles, giving each a value of 0.5*alpha_n, but for three circles, the renderer takes the average of the newest circle with the average of the other two, giving the newest circle a value of 0.5*alpha_n, but the earlier circles each a value of 0.25*alpha_n. As more circles overlap, the process continues, creating a bias toward newer circles and against older ones. What I want, instead, is for each of three or more circles to get a value of 0.33*alpha_n, so that earlier circles are not obscured.
Here's an image of alpha-blending in action. Notice that the later blue circle obscures earlier drawn red and green ones:
Here's what the problem looks like in action. Notice the different appearance of the left side of the lump.
To solve this problem, I've tried various methods:
Using different canvas "blend-modes". "Multiply" (as seen in the above image) did the trick, but created unfortunate color distortions.
Lumping together drawing calls. Instead of making each circle a separate canvas path, I tried lumping them all together into one. Unfortunately, this is incompatible with having separate fill colors and, what's more, the path did not blend with itself at all, creating a matte, monotone silhouette.
Interlacing drawing-order. Instead of drawing the circles in 0 to n order, I tried drawing first the evens and then the odds. This only partially solved the problem, and created an unsightly layering pattern in which the odds appeared to float above the evens.
Building my own blend mode using putImageData. I tried creating a manual pixel-shader to suit my needs using javascript, but, as expected, it was far too slow.
At this point, I'm a bit stuck. I'm looking for creative ways of solving or circumventing this problem, and I welcome your ideas. I'm not very interested in being told that it's impossible, because I can figure that out for myself. How would you elegantly draw a fluid from such data points?
If you can decompose your circles into two groups (evens and odds), such that there is no overlap among circles within a group, the following sequence should give the desired effect:
Clear the background
Draw the evens with an alpha of 1.0 (opaque)
Draw the odds with an alpha of 1.0 (opaque)
Draw the evens with an alpha of 0.5
Places which are covered by neither evens nor odds will show the background. Those which are covered only by evens will show the evens at 100% opacity. Those covered by odds will show the odds with 100% opacity. Those covered by both will show a 50% blend.
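The arithmetic behind this can be checked with the standard source-over compositing formula, out = src*alpha + dst*(1 - alpha), applied per channel. A minimal sketch with single-channel values chosen purely for illustration:

```javascript
// Standard source-over blend for one channel.
function over(src, dst, alpha) {
  return src * alpha + dst * (1 - alpha);
}

const bg = 0, E = 1.0, O = 0.5; // background, evens, odds (illustrative)

// The three-pass sequence from the answer:
let px = over(E, bg, 1.0); // evens opaque: pixel shows E
px = over(O, px, 1.0);     // odds opaque: pixel shows O
px = over(E, px, 0.5);     // evens at 0.5: pixel is 0.5*E + 0.5*O
```

Where both groups cover a pixel, the result is exactly 0.5*E + 0.5*O, the uniform blend the repeated-weighted-average problem was preventing.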
There are other approaches one can use to try to blend three or more sets of objects, but doing it "precisely" is complicated. An alternative, if one has three or more images that should be blended uniformly according to their alpha channel, is to repeatedly draw all of the images while the global alpha decays from 1 to 0 (the procedure above actually does this, and it is numerically exact when there are only two images). Numerical rounding limits the precision of this technique, but even two or three passes may substantially reduce the severity of ordering-caused visual artifacts while using fewer steps than precise blending would require.
Incidentally, if the pattern of blending is fixed, it may be possible to speed up rendering enormously by drawing the evens and odds on separate canvases not as circles, but as opaque rectangles, and subtracting from the alpha channel of one of the canvases the contents of a fixed "cookie-cutter" canvas or fill pattern. If one properly computes the contents of the cookie-cutter canvases, this approach may be used for more than two sets of canvases. Determining the contents of the cookie-cutter canvases may be somewhat slow, but it only needs to be done once.
Well, thanks for all the help, guys. :) But, I understand, it was a weird question and hard to answer.
I'm putting this here in part so that it will provide a resource to future viewers. I'm still quite interested in other possible solutions, so I hope others will post answers if they have any ideas.
Anyway, I figured out a solution on my own: before drawing the circles, I did a comb sort on them to put them in order by z-value, then drew them in reverse. The result was that the highest-valued objects (which should be closer to the viewer) were drawn last, and so were not obscured by other circles. The obscuring effect is still there, but it now happens in a way that makes sense with the geometry. Here is what the simulation looks like with this correction; notice that it is now symmetrical:
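The fix above can be sketched with the built-in Array sort in place of a comb sort (any ordering by z works; the circle fields here are illustrative):

```javascript
// Painter's algorithm: sort ascending by z so the farthest (lowest-z)
// circles are drawn first and the nearest (highest-z) are painted last,
// where they can no longer be obscured.
const circles = [
  { z: 0.2, color: "red" },
  { z: 0.9, color: "blue" },
  { z: 0.5, color: "green" },
];

circles.sort((a, b) => a.z - b.z);

const drawOrder = [];
for (const c of circles) {
  drawOrder.push(c.color); // a real renderer would draw the circle here
}
```

With the sample data, red (farthest) is drawn first and blue (nearest) last, so the overlap bias follows the geometry rather than insertion order.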

Google Map Algorithm (Ajax, Tiles, etc)

I'm working on an app that displays a large image just about the same way as Google Maps. As the user drags the map, more images are loaded so that when a new part of the map is visible, the corresponding images are already in place.
By the way, this is a Javascript project.
I'm thinking of representing each tile as a square div with the image loaded as a background image.
My question: how exactly can I calculate what divs are showing, and when the tiles are moved, how do I tell when a new row of divs has become visible?
Thanks!
About calculating what divs are showing: learn the algorithm for intersecting two rectangles (the stackoverflow question Algorithm to detect intersection of two rectangles? is a good starting point). With that, the divs that are showing are the ones whose intersection with the "view window" is non-empty.
About telling when a new row of divs has become visible: you will probably need an updateInterface() method anyway. Use this method to keep track of the divs showing, and when divs that weren't showing before enter the view window, fire an event handler of sorts.
About implementation: you should probably have the view window be itself a div with overflow: hidden and position: relative. Having a relative position attribute in CSS means that a child with absolute position top 0, left 0 will be at the top-left edge of the container (the view area, in your case).
About efficiency: depending on how fast your "determine which divs are showing" algorithm ends up being, you can try handling the intersection detection only when the user stops dragging, not on the mouse move. You should also preload the areas immediately around your current view window, so that if the user doesn't drag too far away, they will already be loaded.
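For square, fixed-size tiles, the "which divs are showing" computation reduces to a few divisions rather than a per-tile rectangle-intersection loop. A hedged sketch (the function name and the world-pixel viewport representation are assumptions):

```javascript
// view: {x, y, width, height} of the view window in world pixels.
// Returns the {row, col} indices of every tile overlapping the view.
function visibleTiles(view, tileSize) {
  const firstCol = Math.floor(view.x / tileSize);
  const lastCol  = Math.floor((view.x + view.width - 1) / tileSize);
  const firstRow = Math.floor(view.y / tileSize);
  const lastRow  = Math.floor((view.y + view.height - 1) / tileSize);
  const tiles = [];
  for (let row = firstRow; row <= lastRow; row++) {
    for (let col = firstCol; col <= lastCol; col++) {
      tiles.push({ row, col });
    }
  }
  return tiles;
}
```

Comparing firstRow/lastRow between calls is also a cheap way to detect that a new row of divs has become visible, which is the second half of the question.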
Some further reference:
Tile5: Tiling Interfaces
gTile: Javascript tile based game engine
Experiments in rendering a Tiled Map in javascript/html…
There's no reason to implement this yourself, really, unless it's just a fun project. There are several open source libraries that handle online mapping.
To answer your question, you need to have an orthophoto-type image (an image aligned with the coordinate space) and then a mapping from pixel coordinates (i.e. the screen) to world coordinates. If it's not map images, just arbitrary large images then, again, you need to create a mapping between the pixel coordinates of the source image at various zoom levels and the view-port's coordinates.
If you read Google Map's SDK documentation you will see explanations of these terms. It's also a good idea to explore one of the aforementioned existing libraries, read its documentation and see how it's done.
But, again, if this is real work, don't implement it yourself. There's no reason to.
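For the non-map case (an arbitrary large image), the pixel-to-world mapping mentioned above can be as simple as a power-of-two zoom scale plus the current pan offset; a minimal sketch under those assumptions (function names are illustrative):

```javascript
// At zoom level z the source image is scaled by 2^z. "pan" is the scroll
// offset of the view-port in screen pixels.
function worldToScreen(p, zoom, pan) {
  const s = Math.pow(2, zoom);
  return { x: p.x * s - pan.x, y: p.y * s - pan.y };
}

// Inverse mapping: from a view-port pixel back to source-image coordinates.
function screenToWorld(p, zoom, pan) {
  const s = Math.pow(2, zoom);
  return { x: (p.x + pan.x) / s, y: (p.y + pan.y) / s };
}
```

Real map tiles add a projection (e.g. Web Mercator) in front of this, which is one reason the existing libraries are worth using.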
