So I am working on a project that uses multiple different cameras to render the scene to different canvases. Essentially I am doing this example:
http://threejs.org/examples/webgl_multiple_canvases_grid.html
The problem I have found with doing this is that the clipping plane for the different scenes does weird things at the edges. With big objects it's fine, as the example shows, but smaller ones get clipped at the edges. I've made an example showing the problem below.
http://tinyurl.com/pjstjjd
I was wondering if there is any way to fix this. A few different ways I was going to try to explore are as follows:
Overlap the renders a bit so that the clipping plane is wider.
See if there is any way to turn off clipping.
Cry myself to sleep.
Is there something simple I'm missing, or will I have to dig in deeper?
Thank you very much in advance for your time!
Isaac
The problem is you're creating 4 App objects, and in each one you create different random spheres. So your 4 views have different sets of spheres in different places. If you want the views to match, you have to put the objects in the same places in each App.
I pasted this code at line 129 in your sample:
var randomSeed_ = 0;
var RANDOM_RANGE_ = Math.pow(2, 32);
// Replace Math.random with a deterministic linear congruential
// generator so every App sees the same sequence of "random" values.
Math.random = function() {
  return (randomSeed_ =
      (134775813 * randomSeed_ + 1) %
      RANDOM_RANGE_) / RANDOM_RANGE_;
};
This is a random function that returns the same values for each App, since randomSeed_ starts at 0 in each one.
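To see why this works, here is a factory version of the same idea (the names here are illustrative, not from the original sample): a linear congruential generator that, reseeded with the same value, replays exactly the same sequence, so every App places its spheres identically.

```javascript
// Illustrative sketch: the same LCG as above, wrapped in a factory so
// each caller gets its own state. Two generators created with the same
// seed produce identical streams of values.
function makeSeededRandom(seed) {
  var state = seed;
  var RANGE = Math.pow(2, 32);
  return function() {
    state = (134775813 * state + 1) % RANGE;
    return state / RANGE;
  };
}

var randA = makeSeededRandom(0);
var randB = makeSeededRandom(0);
// randA() and randB() now return the same value on every call.
```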
It would help to know what you're ultimately trying to achieve. The Three.JS sample you linked to is intended to show how to spread rendering across multiple monitors on 4 different machines in a grid.
This one shows what to do if the monitors are different sizes and not in a grid. This one shows monitors arranged in a circle or semicircle, for example Google's Liquid Galaxy.
This one shows multiple views in a single canvas although at the time of writing this answer it looks like it needs some updating.
This one shows drawing using one large canvas and placeholder elements for where to draw.
Related
I am working on a system to procedurally build meshes for "mines". Right now I don't want to achieve visual perfection; I am more focused on the basics.
I got to the point where I am able to generate the shape of the mines and from that generate the 2 meshes, one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but my problem is that the ground is really hard to map to UV coordinates properly and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting it while the area of the triangle is greater than X.
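The "keep splitting while the area is greater than X" pass described above could look roughly like this (a hypothetical sketch, not the poster's actual code): each oversized triangle is split into 4 at its edge midpoints, recursively, until every piece is under the threshold.

```javascript
// Hypothetical sketch of the sub-tessellation pass: split a 2D triangle
// into 4 at its edge midpoints, recursing while area > maxArea.
function triangleArea(a, b, c) {
  return Math.abs((b[0] - a[0]) * (c[1] - a[1]) -
                  (c[0] - a[0]) * (b[1] - a[1])) / 2;
}

function midpoint(p, q) {
  return [(p[0] + q[0]) / 2, (p[1] + q[1]) / 2];
}

function subdivide(tri, maxArea, out) {
  out = out || [];
  var a = tri[0], b = tri[1], c = tri[2];
  if (triangleArea(a, b, c) <= maxArea) {
    out.push(tri);
  } else {
    var ab = midpoint(a, b), bc = midpoint(b, c), ca = midpoint(c, a);
    subdivide([a, ab, ca], maxArea, out);
    subdivide([ab, b, bc], maxArea, out);
    subdivide([ca, bc, c], maxArea, out);
    subdivide([ab, bc, ca], maxArea, out); // center triangle
  }
  return out;
}
```

A midpoint split preserves total area exactly, so the ground mesh's footprint is unchanged by the pass.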
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (and a displacement map as well; please ignore that for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
I think that, in theory, should be right. I suspected the order of the vertices was creating the problem, but if that were the case the texture should just appear rotated or odd, not odd AND stretched like that.
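The naive mapping described above amounts to a planar projection over the grid's bounding box. A minimal sketch (hypothetical names; assumes the ground lies in the x/z plane):

```javascript
// Sketch of the naive planar UV mapping: remap each ground vertex's
// (x, z) into [0, 1] over the mesh's bounding box, producing exactly
// one UV pair per vertex. Assumes a non-degenerate bounding box.
function computePlanarUVs(vertices) {
  var minX = Infinity, maxX = -Infinity, minZ = Infinity, maxZ = -Infinity;
  vertices.forEach(function(v) {
    minX = Math.min(minX, v.x); maxX = Math.max(maxX, v.x);
    minZ = Math.min(minZ, v.z); maxZ = Math.max(maxZ, v.z);
  });
  return vertices.map(function(v) {
    return {
      u: (v.x - minX) / (maxX - minX),
      v: (v.z - minZ) / (maxZ - minZ)
    };
  });
}
```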
Once I get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be fine; I don't mind converting and/or re-engineering it to make it work.
My goal is to be able to procedurally build the mesh, send it to multiple clients and achieve the same visual rendering. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on shaders on the client side, because otherwise placing tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the rendering on the client side; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced without needing a whole client/server infrastructure.
Any suggestion?
Thanks!
I sorted out the issue in my code; it was pretty stupid though: by mistake I was adding 3 UV mappings per triangle instead of 1 per point, causing a huge visual mess. Once I sorted that out, I was able to achieve what I needed!
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do, but it's starting to look decent!
So I found out that texturing planets can be really hard. I created a 4096 x 4096 image and wrapped it around a high-poly sphere. Apart from the possible memory and performance issues that come with a 3-4 MB image, the texture looks bad / pixelated in a close-up (orbital) view.
I was thinking that I could increase the resolution significantly by splitting up the picture, then creating a low, medium and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if far away, remove the image from memory and apply the low or medium version.
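The low/medium/high strategy described above boils down to a selection function per section. A minimal sketch (the distance thresholds here are made up and would need tuning for the scene's scale):

```javascript
// Hypothetical sketch of per-section texture LOD selection.
// Thresholds are illustrative; tune them to your planet's scale.
function pickTextureLevel(cameraDistance) {
  if (cameraDistance < 100) return "high";
  if (cameraDistance < 500) return "medium";
  return "low";
}
```

Each frame you would run this over the visible sections and swap texture levels only when the result changes, so far-away sections never hold their high-resolution image in memory.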
To be honest I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL / three.js has significantly improved over time, but since this is all done within the browser, I assume thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it usually means switching from high-polygon to low-polygon models, but more broadly it means doing anything to switch from high detail to low detail.
Because you can't make a 1000000x100000 texture, which is pretty much what you'd need to get the results you want, you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet; a sphere made of, say, 100 pieces when slightly closer; a sphere made of 1000 pieces when closer still; 10000 pieces when even closer; etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere all the way zoomed out and transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes, arranged so that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube mapping: each holds a cubemap face.
Then you loop through all the vertices, take the position vector and normalize it. This will yield a sphere.
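The "normalize every vertex" step above can be sketched as a small pure function (illustrative names; a plain vertex array stands in for whatever geometry structure you use):

```javascript
// Sketch of the cube-to-sphere step: pushing every vertex of the
// tessellated cube out to the same radius turns the box into a sphere
// while keeping the per-face UV layout intact.
function cubeToSphere(vertices, radius) {
  return vertices.map(function(v) {
    var len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {
      x: v.x / len * radius,
      y: v.y / len * radius,
      z: v.z / len * radius
    };
  });
}
```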
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096 x 4096 texture should be 4096 x 2048 on the default sphere with an equirectangular mapping, but the newly mapped sphere can hold 6 x 4096 x 4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
I have two network graphs. Placing them next to each other is the easiest way to compare them if the graphs are small, but as the graphs grow, it becomes hard for the user to compare the views. I wanted to know the best way to merge the two graphs and show the comparison.
In the above picture it can be seen that the number of nodes is the same, but the way they are linked is different.
I would like to know how to present the compared data.
Any ideas about different views to present such a comparison using d3.js?
I would suggest not trying to apply a force layout or similar method for drawing the graphs (that would draw the graph in a fashion similar to the one in the picture in your question). Instead, I would like to suggest using a circular layout for both graphs (similar to a chord diagram):
This visual example is made for other purposes, but similar principles could be applied to your problem:
Lay out all vertices on a circle, equidistantly (if there are vertices belonging to only one of the two graphs, they can be grouped and marked in a different color)
If there is a link between two vertices in both graphs, connect them in one color (let's say green)
If there is a link between two vertices in one graph only, connect them in a color depending on the graph (let's say red and purple)
This method scales well with the number of vertices.
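The steps above can be sketched as a plain layout computation, independent of d3 (all names here are illustrative; d3 would only be used to draw the resulting positions and links):

```javascript
// Sketch of the circular comparison layout: place vertices
// equidistantly on a circle, then color each edge by which of the two
// graphs it appears in (green = both, red = only A, purple = only B).
function circularComparisonLayout(nodes, edgesA, edgesB, radius) {
  var positions = {};
  nodes.forEach(function(id, i) {
    var angle = (2 * Math.PI * i) / nodes.length;
    positions[id] = { x: radius * Math.cos(angle), y: radius * Math.sin(angle) };
  });
  // Normalize undirected edges to a canonical key for set membership.
  var key = function(e) { return e[0] < e[1] ? e[0] + "|" + e[1] : e[1] + "|" + e[0]; };
  var setA = new Set(edgesA.map(key));
  var setB = new Set(edgesB.map(key));
  var links = [];
  new Set([...setA, ...setB]).forEach(function(k) {
    var color = setA.has(k) && setB.has(k) ? "green"
              : setA.has(k) ? "red" : "purple";
    links.push({ key: k, color: color });
  });
  return { positions: positions, links: links };
}
```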
Hope this helps.
The following methods for network comparison could be useful in this scenario:
NetConfer, a web application
Nature Scientific Reports, an article guiding comparisons
CompNet, a GUI tool
So I have an animation that I'm coding in javascript and HTML5 (no libraries, no plugins, no nothing and I'd like it to stay that way). The animation uses physics (basically a bunch of unusual springs attached to masses) to simulate a simple liquid. The output from this part of the program is a grid (2d-array) of objects, each with a z value. This works quite nicely. My problem arises when drawing the data to an HTML5 Canvas.
That's what it looks like. Trust me, it's better when animated.
For each data point, the program draws one circle with a color determined by the z value. Just drawing these points, however, the grid pattern is painfully obvious and it is difficult to see the fluid that it represents. To solve this, I made the circles larger and more transparent so that they overlapped each other and the colors blended, creating a simple convolution blur. The result was both fast and beautiful, but for one small flaw:
As the circles are drawn in order, their color values don't stack equally, and so later-drawn circles obscure the earlier-drawn ones. Mathematically, the renderer is taking repeated weighted averages of the color-values of the circles. This works fine for two circles, giving each a value of 0.5*alpha_n, but for three circles, the renderer takes the average of the newest circle with the average of the other two, giving the newest circle a value of 0.5*alpha_n, but the earlier circles each a value of 0.25*alpha_n. As more circles overlap, the process continues, creating a bias toward newer circles and against older ones. What I want, instead, is for each of three or more circles to get a value of 0.33*alpha_n, so that earlier circles are not obscured.
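The bias described above can be checked numerically with a one-channel source-over model (a sketch, not the canvas implementation): each newly drawn circle keeps weight alpha, while everything beneath it is scaled by (1 - alpha), so a circle's contribution decays with every newer circle drawn over it.

```javascript
// One-channel source-over compositing: start from a background value
// and draw each circle's color over it at the given alpha. The newest
// circle keeps weight `alpha`; older ones decay by (1 - alpha) per layer.
function compositeOver(background, colors, alpha) {
  var value = background;
  colors.forEach(function(c) {
    value = c * alpha + value * (1 - alpha);
  });
  return value;
}
```

With alpha 0.5, a white circle followed by one black circle contributes 0.25, and followed by two black circles only 0.125, which is the newer-over-older bias in the rendering.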
Here's an image of alpha-blending in action. Notice that the later blue circle obscures earlier drawn red and green ones:
Here's what the problem looks like in action. Notice the different appearance of the left side of the lump.
To solve this problem, I've tried various methods:
Using different canvas "blend-modes". "Multiply" (as seen in the above image) did the trick, but created unfortunate color distortions.
Lumping together drawing calls. Instead of making each circle a separate canvas path, I tried lumping them all together into one. Unfortunately, this is incompatible with having separate fill colors and, what's more, the path did not blend with itself at all, creating a matte, monotone silhouette.
Interlacing drawing-order. Instead of drawing the circles in 0 to n order, I tried drawing first the evens and then the odds. This only partially solved the problem, and created an unsightly layering pattern in which the odds appeared to float above the evens.
Building my own blend mode using putImageData. I tried creating a manual pixel-shader to suit my needs using javascript, but, as expected, it was far too slow.
At this point, I'm a bit stuck. I'm looking for creative ways of solving or circumventing this problem, and I welcome your ideas. I'm not very interested in being told that it's impossible, because I can figure that out for myself. How would you elegantly draw a fluid from such data points?
If you can decompose your circles into two groups (evens and odds), such that there is no overlap among circles within a group, the following sequence should give the desired effect:
Clear the background
Draw the evens with an alpha of 1.0 (opaque)
Draw the odds with an alpha of 1.0 (opaque)
Draw the evens with an alpha of 0.5
Places which are covered by neither evens nor odds will show the background. Those which are covered only by evens will show the evens at 100% opacity. Those covered by odds will show the odds with 100% opacity. Those covered by both will show a 50% blend.
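The claims in that paragraph can be verified with the same one-channel source-over model (a numeric sketch of the draw sequence, not canvas code):

```javascript
// Numeric check of the evens/odds sequence: draw evens opaque, odds
// opaque, then evens again at alpha 0.5, and inspect three kinds of
// regions. Regions covered by both groups end up an exact 50/50 blend.
function over(dst, src, alpha) {
  return src * alpha + dst * (1 - alpha);
}

function evenOddBlend(bg, even, odd) {
  // Region covered by both groups: all three passes apply.
  var both = over(over(over(bg, even, 1), odd, 1), even, 0.5);
  // Region covered only by evens: passes 1 and 3 apply.
  var evensOnly = over(over(bg, even, 1), even, 0.5);
  // Region covered only by odds: only pass 2 applies.
  var oddsOnly = over(bg, odd, 1);
  return { both: both, evensOnly: evensOnly, oddsOnly: oddsOnly };
}
```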
There are other approaches one can use to try to blend three or more sets of objects, but doing it "precisely" is complicated. An alternative approach if one has three or more images that should be blended uniformly according to their alpha channel is to repeatedly draw all of the images while the global alpha decays from 1 to 0 (note that the aforementioned procedure actually does that, but it's numerically precise when there are only two images). Numerical rounding issues limit the precision of this technique, but even doing two or three passes may substantially reduce the severity of ordering-caused visual artifacts while using fewer steps than would be required for precise blending.
Incidentally, if the pattern of blending is fixed, it may be possible to speed up rendering enormously by drawing the evens and odds on separate canvases not as circles, but as opaque rectangles, and subtracting from the alpha channel of one of the canvases the contents of a fixed "cookie-cutter" canvas or fill pattern. If one properly computes the contents of the cookie-cutter canvases, this approach may be used for more than two sets of canvases. Determining the contents of the cookie-cutter canvases may be somewhat slow, but it only needs to be done once.
Well, thanks for all the help, guys. :) But, I understand, it was a weird question and hard to answer.
I'm putting this here in part so that it will provide a resource to future viewers. I'm still quite interested in other possible solutions, so I hope others will post answers if they have any ideas.
Anyway, I figured out a solution on my own: before drawing the circles, I did a comb sort on them to put them in order by z-value, then drew them in reverse. The result was that the highest-valued objects (which should be closer to the viewer) were drawn last, and so were not obscured by other circles. The obscuring effect is still there, but it now happens in a way that makes sense with the geometry. Here is what the simulation looks like with this correction; notice that it is now symmetrical:
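The fix amounts to a painter's-algorithm sort. A minimal sketch (the answer used a comb sort; the built-in Array sort gives the same back-to-front order):

```javascript
// Sketch of the painter's-algorithm fix: sort ascending by z so that
// the highest (closest) circles are drawn last and end up on top.
function sortBackToFront(circles) {
  return circles.slice().sort(function(a, b) { return a.z - b.z; });
}
```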
I need a timeline for my web project.
Something like this. I read the code of this timeline, but did not understand it because it is not documented well enough.
My problem is the math behind all of this (not the interaction with the canvas).
I have read several articles about the math of the scroll bars, but none of them talk about zoom.
Some articles suggest holding a canvas element with a very large width and displaying just the viewport. I don't think that's the right way to do it; I want to draw just the correct viewport.
In my project, I have an array of n points.
Each point holds a time value represented in seconds, but not all of the points are within the viewport.
Considering the current zoom level, how do I calculate:
What points should be drawn and where to draw them?
What is the size and position of the thumb?
Any articles / tutorials about such a thing?
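For reference, the math the question asks about can be sketched like this (all names are hypothetical; it assumes the viewport is defined by the time at its left edge and a zoom factor expressed as pixels per second):

```javascript
// Hypothetical sketch of the timeline viewport math. Points are plain
// time values in seconds; only those inside the visible time range are
// kept, each mapped to an x pixel coordinate. The scrollbar thumb's
// size is the visible fraction of the whole timeline, its position the
// offset fraction from the left.
function layoutTimeline(points, viewStart, pxPerSecond, viewWidthPx,
                        totalStart, totalEnd) {
  var viewSeconds = viewWidthPx / pxPerSecond;
  var visible = points
    .filter(function(t) { return t >= viewStart && t <= viewStart + viewSeconds; })
    .map(function(t) { return { time: t, x: (t - viewStart) * pxPerSecond }; });
  var totalSeconds = totalEnd - totalStart;
  var thumb = {
    size: Math.min(1, viewSeconds / totalSeconds),     // fraction of track width
    position: (viewStart - totalStart) / totalSeconds  // fraction from the left
  };
  return { visible: visible, thumb: thumb };
}
```

Zooming in just means increasing pxPerSecond, which shrinks viewSeconds and therefore grows the thumb's relative size; panning only changes viewStart.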
You might be able to use something like Flot, which handles the placement of points as well as zooming and panning. Here's an example of that.
There are a bunch of other drawing libraries; here's a good list.
You always have RaphaelJS (raphaeljs.com), one of the most-used libraries for working with SVG; with it you can write your own JS to generate the timeline.