I am trying to write a layout extension and have already looked at the examples provided both from the existing extensions (e.g. arbor, cola, cose-bilkent, etc.) and the scaffolding here. The place where I am hung up is with the webGL renderer. In all of the examples, this is handled by the core (for canvas), if I am not mistaken. Is it possible to use a webGL renderer through three.js? If so, is it as simple as just creating/attaching the required webGL elements in the extension (e.g. scene, cameras, light-sources, etc.)?
The reason for the webGL push is that I want to implement a 3D adjacency matrix (I cannot remember where I found the paper, but someone had implemented this in a desktop application with subject, predicate, and object being the X, Y, and Z axes) and don't see another way to do that efficiently for large result sets on the order of 10-25K nodes/edges.
Cytoscape supports multiple renderers, but it does not support 3D co-ordinates. Positions are defined as (x, y).
You could add a 3D renderer if you like, but you'd have to use a data property for the z position, because the position object doesn't support z.
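A minimal sketch of what that could look like, assuming the standard Cytoscape.js element API (the hypothetical 3D renderer would be responsible for reading z back out of each node's data):

```javascript
// Positions stay 2D; z lives in ordinary element data, where a custom
// 3D renderer could read it back.
cy.add({
  group: 'nodes',
  data: { id: 'n1', z: 150 },     // z stored as a plain data field
  position: { x: 100, y: 200 }    // Cytoscape's position object is (x, y) only
});

// Later, e.g. inside the renderer:
const z = cy.getElementById('n1').data('z') || 0;
```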
It's a lot of work to write a renderer, especially one that's performant and fully featured. If you were to write a 2D renderer, you could reuse all the existing hit tests, gestures/interaction, events, etc. -- so you could focus on drawing tech. To write a 3D renderer, you'll have to do all the rendering logic from scratch.
If your data requires 3D (e.g. representing 3D atomic bonds or 3D protein structures), then writing a 3D renderer may be a good idea. If it's just to have a neat 3D effect, it's probably not worth it -- as 3D is much harder for users to navigate and understand.
Related
I'm working on a WebGL application that works similarly to video compositing programs like After Effects.
The application takes textures and stacks them like layers. This can be as simple as drawing one texture on top of the other, or using common blend modes like screen to combine several layers.
Each layer can be individually scaled/rotated/translated via an API call, and altogether the application forms a basic compositing software.
Now, my question: doing all this in a single WebGL canvas is a lot to keep track of.
// Start webgl rant
The application won't know how many layers there are ahead of time, and so textures, coordinate planes, and shaders will need to be made dynamically.
Blending the layers together would require writing out the math for each type of blend mode. Transforming vertex coordinates requires matrix math, and just doing things in WebGL in general requires lots of code, it being a low-level API.
// End webgl rant
However, this could be easily solved by making a new canvas element for each layer. Each WebGL canvas will have a texture drawn onto it, and then I can scale/move/blend the layers using simple CSS code.
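A rough sketch of that per-layer approach (assumptions: one WebGL canvas per layer; drawLayer is a hypothetical stand-in for whatever renders the layer's texture into its own context):

```javascript
// One WebGL canvas per layer, composited by the browser via CSS.
function createLayer(container, blendMode) {
  const canvas = document.createElement('canvas');
  canvas.width = 1280;
  canvas.height = 720;
  canvas.style.position = 'absolute';
  canvas.style.top = '0';
  canvas.style.left = '0';
  canvas.style.mixBlendMode = blendMode;   // e.g. 'screen', 'multiply'
  container.appendChild(canvas);
  return { canvas: canvas, gl: canvas.getContext('webgl') };
}

const layer = createLayer(document.getElementById('stage'), 'screen');
// Scale/rotate/translate the whole layer with CSS instead of per-vertex matrix math:
layer.canvas.style.transform = 'translate(50px, 20px) rotate(10deg) scale(1.2)';
// drawLayer(layer.gl);   // hypothetical: draw this layer's texture into its context
```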
On the surface, it seems like the performance hit won't be that bad, because even if I did combine everything into a single context, each layer would still need its own texture and coordinate system. So the number of textures and coordinates stays the same, just spread across multiple contexts.
However deep inside I know somehow this is horribly wrong, and my computer's going to catch fire if I even try. I just can't figure out why.
With a goal of being able to support ~4 layers at a time, would using multiple canvases be a valid option? Besides worrying about browsers having a maximum number of active WebGL contexts, are there any other limitations to be aware of?
I designed an ad hoc library to write SVG images. I use it for the graphical visualisation of large sparse matrices: homology matrices from the comparison of two genomes.
In my case, a typical homology matrix is 20,000 x 20,000 with around 30,000 scattered dots (representing homologies); see for instance this homology matrix.
My ad hoc library is not mature enough for new developments, and I am interested in the D3.js library. I also found the graphical library Cairo and its binding pyCairo interesting.
My needs are:
the possibility to output SVG, PNG or PDF images of the matrix.
when the mouse hovers over a dot of the homology matrix, I would like to show specific information.
a plus would be the ability to zoom into an area of the matrix, and show new details above a zoom threshold.
I know that D3.js has dynamic features (interesting for navigating/zooming through the sparse matrix), but I wonder if it will display my data fast enough. It might be a better idea to use Cairo in an application and recompute the image at different zoom levels.
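For a sense of scale, a minimal sketch of the D3.js approach (assumptions: D3 v7, dots supplied as an array of {x, y, info} index pairs, hover details via a native <title> tooltip):

```javascript
// One small rect per homology dot, tooltip on hover, d3.zoom for pan/zoom.
const size = 800;                              // rendered size in pixels
const scale = size / 20000;                    // matrix index -> pixel

const svg = d3.select('body').append('svg')
  .attr('width', size)
  .attr('height', size);

const g = svg.append('g');

g.selectAll('rect')
  .data(dots)                                  // dots = [{x: i, y: j, info: '...'}, ...]
  .join('rect')
  .attr('x', d => d.x * scale)
  .attr('y', d => d.y * scale)
  .attr('width', 2)
  .attr('height', 2)
  .append('title')                             // shown on mouse-over
  .text(d => d.info);

svg.call(d3.zoom().on('zoom', (event) => g.attr('transform', event.transform)));
```

At 30,000 SVG nodes this is workable, but it is close to the point where a canvas- or WebGL-based renderer starts to pay off.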
In brief, I feel that I have two solutions:
write an HTML file with a D3.js script that adds the SVG elements of the homology matrix and the interactivity. D3.js would be great for sharing matrices from a server.
write a dedicated app (e.g. with GTK) and draw the matrix inside the app with Cairo. Interactions would then be added by asking Cairo to re-draw each time the user interacts with the app.
Am I missing something? Which solution would you use?
I have previously done a project building a 3D environment that allows the user to draw a shape and then extrude it into a 3D geometry (e.g. a circle into a cylinder), using the JavaScript library Three.js and its ExtrudeGeometry() function.
Currently I am building a similar program, but on a mobile platform using Xamarin and MonoGame/XNA. Being able to extrude a shape into a 3D geometry makes many things much easier; however, so far I haven't found a function that provides similar functionality. Is there a counterpart of Three.js's ExtrudeGeometry in MonoGame/XNA, or an alternative way to accomplish the same result?
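For reference, the Three.js side of this is roughly the following (a minimal sketch; note that older Three.js releases call the depth option amount):

```javascript
// Three.js: extrude a 2D circle shape into a cylinder-like solid.
const shape = new THREE.Shape();
shape.absarc(0, 0, 5, 0, Math.PI * 2, false);   // a circle of radius 5

const geometry = new THREE.ExtrudeGeometry(shape, {
  depth: 10,            // extrusion length ('amount' in older releases)
  bevelEnabled: false
});
const mesh = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());
```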
I have a WebGL application I've written using three.js, but the FPS is not good enough on some of my test machines. I've tried to profile my application using Chrome's about:tracing, with help from this article: http://www.html5rocks.com/en/tutorials/games/abouttracing/
It appears that the GPU is being overloaded. I also found out that my FPS falls drastically when I have my entire scene in the camera's view. The scene contains about 17 meshes and a single directional light source. It's not really a heavy scene; I've seen much heavier scenes rendered flawlessly on the same GPU.
So, what changes can I make to the scene to make it less heavy, without completely changing it? I've already tried removing the textures, but that doesn't seem to fix the problem.
Is there a way to figure out what computation three.js is pushing onto the GPU? Or would this break the basic abstraction three.js provides?
What are some general tips for profiling WebGL/three.js apps on the GPU?
There are various things to try.
Are you draw bound?
Change your canvas to 1x1 pixel. Does your framerate go way up? If so, you're drawing too many pixels or your fragment shaders are too complex.
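In three.js terms the test is something like the following sketch, assuming a standard WebGLRenderer:

```javascript
// Temporarily shrink the drawing buffer to 1x1 to test whether you are
// fill-rate bound (too many pixels / too-complex fragment shaders).
renderer.setSize(1, 1);
// ...run the app and compare the framerate...
// renderer.setSize(window.innerWidth, window.innerHeight);  // restore afterwards
```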
To see if simplifying your fragment shader would help, use a simpler shader. I don't know three.js that well; maybe the Basic Material?
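In three.js that would mean temporarily swapping every material for an unlit one, e.g. this sketch (MeshBasicMaterial skips per-pixel lighting work):

```javascript
// Swap every mesh's material for an unlit one to test fragment-shader cost.
const debugMaterial = new THREE.MeshBasicMaterial({ color: 0xff00ff });
scene.traverse(function (obj) {
  if (obj instanceof THREE.Mesh) obj.material = debugMaterial;
});
```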
Do you have shadows? Turn them off. Does it go faster? Can you use simpler shadows? For example, the shadows in this sample are fake: they are just planes with a circle texture.
Are you using any post-processing effects? Post-processing effects are expensive, especially on mobile GPUs.
Are you drawing lots of opaque stuff? If so, can you sort your drawing order so you draw front to back (close to far)? I'm not sure if three.js has an option to do this or not. I know it can sort transparent stuff back to front, so it should be simple to reverse the test. This will make rendering go quicker, assuming you're drawing with the depth test on, because pixels in the back will be rejected by the DEPTH_TEST and so won't have the fragment shader run for them.
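Newer three.js releases already sort opaque objects roughly front to back when sortObjects is enabled (the default); if yours doesn't, one way to hint the order yourself is renderOrder (also only in newer releases; lower values draw first). A sketch:

```javascript
// Hint front-to-back drawing for opaque meshes: nearer objects get a lower
// renderOrder, so they are drawn first and occlude what is behind them.
const worldPos = new THREE.Vector3();
scene.traverse(function (obj) {
  if (obj instanceof THREE.Mesh) {
    obj.getWorldPosition(worldPos);
    obj.renderOrder = camera.position.distanceTo(worldPos);
  }
});
```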
Another thing you can do to save bandwidth is draw to a smaller canvas and have it be stretched using CSS to cover the area you want it to appear in. Lots of games do this.
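With three.js this can be done by passing false as setSize's updateStyle argument and stretching the canvas yourself, roughly:

```javascript
// Render at half resolution but display at full size via CSS.
renderer.setSize(window.innerWidth / 2, window.innerHeight / 2, false); // false: don't touch CSS size
renderer.domElement.style.width = window.innerWidth + 'px';
renderer.domElement.style.height = window.innerHeight + 'px';
```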
Are you geometry bound?
You say you're only drawing 17 meshes, but how big are those meshes? 17 twelve-triangle cubes, or 17 one-million-triangle meshes?
If you're geometry bound, can you simplify? If the geometry goes far into the distance, can you split it and use LODs? See the LOD sample.
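three.js also ships a built-in LOD helper; roughly like this sketch (the high/medium/low meshes are hypothetical versions of the same geometry):

```javascript
// Swap in cheaper meshes as the object gets farther from the camera.
const lod = new THREE.LOD();
lod.addLevel(highDetailMesh, 0);       // used when closer than 50 units
lod.addLevel(mediumDetailMesh, 50);    // used between 50 and 200 units
lod.addLevel(lowDetailMesh, 200);      // used beyond 200 units
scene.add(lod);
// Newer renderers update LODs automatically; otherwise call
// lod.update(camera) once per frame before rendering.
```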
I'm trying to build a camera calibration function for a three.js app and could very much do with some help.
I have a geometry in a 3d scene; let's say for this example it is a cube. As this object and scene exist in 3d, all properties of the cube are known.
I also have a physical copy of the geometry.
What I would like to do is take a photo of the physical geometry and mark 4+ points on the image in x and y. These points correspond to 4+ points in the 3d geometry.
Using these point correspondences, I would like to be able to work out the position and orientation of the camera relative to the geometry in the photo, and then match a virtual camera to the 3d geometry in the three.js scene.
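Applying the resulting pose to a three.js camera should be the straightforward part. A minimal sketch, assuming the estimated rotation and position have already been converted into three.js's convention (camera looking down its local -Z axis, Y up), with fovYDegrees, width, height, rotationMatrix and cameraPos standing in for the calibration outputs:

```javascript
// Apply an estimated camera pose (camera-to-world rotation + position).
// rotationMatrix is a THREE.Matrix4 holding only rotation, cameraPos a
// THREE.Vector3 -- both assumed to come out of the calibration step.
const camera = new THREE.PerspectiveCamera(fovYDegrees, width / height, 0.1, 1000);
camera.position.copy(cameraPos);
camera.quaternion.setFromRotationMatrix(rotationMatrix);
camera.updateMatrixWorld(true);
```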
I looked into the possibility of using an AR lib such as JS-aruco or JSARToolkit, but these systems need a marker, whereas my system needs to be markerless. The user will choose the 4 (or more) points on the image.
I've been doing some research and identified that Tsai's algorithm for camera alignment should suit my needs.
While I have a good knowledge of JavaScript and three.js, my linear algebra isn't the best, and hence I am having trouble translating the algorithm into JavaScript.
If anyone could give me some pointers, or is able to explain the process of Tsai's algorithm in a JavaScript manner, I would be so super ultra thankful.