I am loading STL files into a three.js scene using the STL loader. These STL files range from 5 MB to 50 MB.
Is there a method I can use to progressively load/stream/increase level of detail (not sure of the correct terminology) as the model loads so that my users aren't staring at a blank screen for minutes before anything appears?
If the model is 20,000 triangles, does three.js have a method for maybe loading only 2,000 first and then progressing to the fully detailed model?
Real progressive loading / mesh streaming is not available out of the box. It would be great to have, though, and it is doable.
It has been done with WebGL without three.js using POP buffers: http://x3dom.org/pop/ & https://github.com/thibauts/pop-buffer
One of the demos is: http://x3dom.org/pop/happy.html
I hope one day we'll have nice POP-buffer (or similar) progressive loading for three.js too -- it's on my/our todo / wish list as well, but many other things come before it here .. I certainly wouldn't mind if someone writes it earlier :)
Three does include a simple LOD mechanism but it is to help with rendering load and not with loading time: http://threejs.org/docs/#Reference/Objects/LOD & http://threejs.org/examples/webgl_lod.html
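As a minimal sketch of that LOD mechanism (highMesh and lowMesh are placeholders for meshes you have already created yourself):

const lod = new THREE.LOD();
lod.addLevel(highMesh, 0);   // used while the camera is within 50 units
lod.addLevel(lowMesh, 50);   // used beyond 50 units
scene.add(lod);
// Depending on the three.js version you may also need to call
// lod.update(camera) once per frame in your render loop.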
For a simple solution, you could author low and high versions of your models and write custom loading logic that first loads and shows the low version, then proceeds to load the high one.
We've done that in one project and it works fine, as expected. The only downside I can see is that it increases the total time to get the high version, but the low version can be very small so it's OK (in our case the low-poly models are untextured with just vertex colours, while the high ones have many more polys and essentially quite big textures too).
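A minimal sketch of that custom loading logic could look roughly like this (the file names are placeholders; it assumes the standard STLLoader, whose callback receives a geometry):

const loader = new THREE.STLLoader();
const material = new THREE.MeshPhongMaterial({ color: 0xcccccc });

loader.load('model-low.stl', function (lowGeometry) {
  // Show the small, low-poly version as soon as it arrives.
  const mesh = new THREE.Mesh(lowGeometry, material);
  scene.add(mesh);

  // Then fetch the detailed version in the background and swap it in.
  loader.load('model-high.stl', function (highGeometry) {
    mesh.geometry.dispose();      // free the low-poly geometry
    mesh.geometry = highGeometry;
  });
});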
The best solution for this is currently Nexus, which has been under intensive development in the last couple of years. It has a loader for threejs.
Nexus and POP buffers could be the right solutions for progressive LOD loading. However, if you just want to try a simple (just-working) solution, I suggest having a look at this:
https://github.com/amd/rest3d
Based on the glTF format, it transmits the meshes one by one, which in a certain sense reduces the waiting time before the first render. If you rearrange the mesh data and use a streaming buffer to download it, you can achieve a kind of progressive loading/rendering.
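Just to illustrate the streaming-download idea (this is not rest3d's actual API; parseChunk is entirely hypothetical and stands for whatever incremental parser your format allows):

async function streamModel(url) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  while (true) {
    const { done, value } = await reader.read();  // value is a Uint8Array chunk
    if (done) break;
    parseChunk(value);  // hypothetical: extend the geometry and re-render as data arrives
  }
}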
I am building a relatively simple Three.js Application.
Outline: You move with your camera along a path through a "world". The world is all made up from simple Sprites with SpriteMaterials with transparent textures on it. The textures are basically GIF images with alpha transparency.
The whole application works fine and also the performance is quite good. I reduced the camera depth as low as possible so objects are only rendered quite close.
My problem is: I have many different "objects" (all sprites) with many different textures. I reuse the texture/material references for the same types of elements that are used multiple times in the scene (like trees and rocks).
Still, I'm getting to the point where memory usage goes up too much (above 2 GB) due to all the textures used.
Now, when moving through the world, not all objects are visible/displayed from the beginning, even though I add all the sprites to the scene from the very start. Checking the console, the not-yet-visible objects and their textures are only loaded when "moving" further into the world, when new elements actually become visible in the frustum. Then the memory usage also goes gradually up and up.
I cannot really use "object pooling" for building the world due to its "layout", let's say.
To test, I added a function that removes objects from the scene and disposes of their material.map as soon as the camera has passed by. Something like:
this.env_sprites[i].material.map.dispose();   // free the texture on the GPU
this.env_sprites[i].material.dispose();       // free the material
this.env_sprites[i].geometry.dispose();       // free the geometry buffers
this.scene.remove(this.env_sprites[i]);       // detach the sprite from the scene
this.env_sprites.splice(i, 1);                // drop my own reference so it can be garbage collected
This works for the garbage collection and frees up memory again. My problem then is that, when moving backwards with the camera, the sprites would need to be re-added to the scene and the materials/textures loaded again, which is quite heavy for performance and does not seem the right approach to me.
Is there a known technique for dealing with such a setup with regard to memory management and "removing" and re-adding objects and textures (in the same place)?
I hope I could explain the issue well enough.
This is how the "world" looks, to give you an impression:
Each individual sprite wastes a ton of memory: your sprite imagery will rarely be square, so there will be wasted space in each texture, and the mipmaps, which are also needed for each sprite, take a lot of space on top of that. Also, on lower-end devices you might hit a limit on the total number of textures you can use -- but that may or may not be a problem for you.
In any case, the best way to limit GPU memory usage with so many distinct sprites is to use a texture atlas. That means you have at most perhaps a handful of textures, each texture contains many of your sprites, and you use distinct UV coordinates within the textures for each sprite. Even then you may still end up wasting memory over time due to fragmentation as sprites are allocated and deallocated, but you would be running out of memory far less quickly. If you use texture atlases you might even be able to load all sprites at the same time without having to deallocate them.
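As a rough sketch of the idea (the 1024x1024 size and file name are made up, a real atlas tool would give you a lookup table of regions, and it assumes your three.js version applies map.offset/repeat to sprites):

const atlas = new THREE.TextureLoader().load('atlas.png');   // placeholder file name

function spriteFromAtlas(x, y, w, h) {
  const tex = atlas.clone();                      // each sprite gets its own "view" of the atlas
  tex.needsUpdate = true;
  tex.repeat.set(w / 1024, h / 1024);             // size of the region in UV space
  tex.offset.set(x / 1024, 1 - (y + h) / 1024);   // UV origin is the bottom-left corner
  return new THREE.Sprite(new THREE.SpriteMaterial({ map: tex, transparent: true }));
}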
If you want to try and tackle the problem programmatically, there's a library I wrote to manage sprite texture atlases dynamically. I use it to render text myself, but you can use any canvas functions to create your images. It currently uses a basic knapsack algorithm (which is replaceable) to manage allocation of sprites across the textures. In my case it means I need only two 1024x1024 textures rather than 130 or so individual sprite textures of wildly varying sizes, and that really saves a lot of GPU memory.
Note that for offline use there are probably better tools out there that can generate a texture atlas and generate UV coordinates for each sprite, though it should be possible to use node.js together with my library to create the textures and UV data offline which can then be used in a three.js scene online. That would be an exercise left to the reader though.
Something you can also try is to pool all your most common sprites in "main" texture atlases which are always loaded, and load and unload the less commonly used ones on the fly.
I am trying to build a network graph (like a brain network) to display millions of nodes. I would like to know how far I can push d3.js in terms of adding more network nodes to one graph.
Like for example, http://linkedjazz.org/network/ and http://fatiherikli.github.io/programming-language-network/#foundation:Cappuccino
I am not that familiar with d3.js (though I am a JS dev); I just want to know whether d3.js is the right tool for building a massive network visualization (one million nodes or more) before I start looking at other tools.
My requirements are simply: build an interactive, web-based network visualization that can scale.
Doing a little searching myself, I found the following D3 Performance Test.
Be careful: I locked up a few of my browser tabs trying to push this to the limit.
Some further searching led me to a possible solution where you can pre-render the d3.js charts server side, but I'm not sure this will help depending on your level of interaction desired.
That can be found here.
"Scaling" is not really an abstract question, it's all about how much you want to do and what kind of hardware you have available. You've defined one variable: "millions of nodes". So, the next question is what kind of hardware will this run on? If the answer is "anything that hits my website", the answer is "no, it will not scale". Not with d3 and probably not with anything. Low cost smartphones will not handle millions of nodes. If the answer is "high end workstations" the answer is "maybe".
The only way to know for sure is to take the lowest-end hardware profile you plan to support and test it. Can you guarantee users have access to a 64GB 16 core workstation? An 8GB 2 core laptop? Whatever it is, load up a page with whatever the maximum number of nodes is and sketch in something to simulate the demands of the type of interaction you want and see if it works.
How well d3 scales depends very much on how you go about using it.
If you use d3 to render lots of svg elements, browsers will start to have performance issues in the upper thousands of elements. You can render up to about 100k elements before the browser crashes, but at that point user interaction is basically useless.
It is possible, however, to render lots and lots of lines or circles with a canvas. With canvas, everything is rendered into a single bitmap image. Rather than creating a new element for each node or line, you draw it into that image. The downside of this is that animation is a bit more difficult, since you can't move elements in a canvas, only draw on top of it or redraw the whole thing. This isn't impossible, but it would be computationally expensive with a million nodes.
Since canvas doesn't have nodes, if you want to use the enter/exit/update paradigm with it, you have to put placeholder elements in the DOM. Here's a good example of how to do that: DOM-to-canvas with D3.
Since the memory costs of canvas don't scale with the number of nodes, it makes for a very scalable solution for large visualizations, but workarounds are required to get it to be interactive.
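A minimal sketch of the canvas approach, assuming d3 v4+ and nodes/links arrays you already have:

const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

d3.forceSimulation(nodes)
  .force('link', d3.forceLink(links).id(d => d.id))
  .force('charge', d3.forceManyBody())
  .force('center', d3.forceCenter(canvas.width / 2, canvas.height / 2))
  .on('tick', draw);

function draw() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);

  // One path for all links instead of one DOM element per link.
  ctx.beginPath();
  links.forEach(l => { ctx.moveTo(l.source.x, l.source.y); ctx.lineTo(l.target.x, l.target.y); });
  ctx.strokeStyle = '#999';
  ctx.stroke();

  // One path for all nodes.
  ctx.beginPath();
  nodes.forEach(n => { ctx.moveTo(n.x + 2, n.y); ctx.arc(n.x, n.y, 2, 0, 2 * Math.PI); });
  ctx.fillStyle = '#333';
  ctx.fill();
}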
I am working on a 3D environment in three.js and am running into some lagging/performance issues because I am loading too many images into my scene. I am loading about 300 500x500 .pngs (I know that's a lot) and rendering them all at once. Naturally it would seem that I would get much higher performance if I didn't render all of the images in the scene at once, but rather only when my character/camera is within a certain distance of an object that requires an image in its material. My question is: which method would be best to use?
1. Load all of the images and map them to materials that I save in an array. Then programmatically apply the correct materials from that array to objects that I am close to, and apply default (inexpensive) materials to objects that I am far away from.
2. Use a relatively small maximum camera depth in association with fog to prevent the renderer from having to render all of the images at once.
My main goal is to create a system that allows for all of the images to be viewed when the character is near them while also being as inexpensive on the user's computer as possible.
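Roughly, I imagine method 1 looking something like this in the render loop (just a sketch; imageObjects, cheapMaterial and userData.detailedMaterial are placeholder names):

const DETAIL_DISTANCE = 50;   // would need tuning

function updateMaterials() {
  for (const obj of imageObjects) {
    const near = camera.position.distanceTo(obj.position) < DETAIL_DISTANCE;
    obj.material = near ? obj.userData.detailedMaterial : cheapMaterial;
  }
}
// called once per frame, before renderer.render(scene, camera)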
Perhaps there is a better method than the two that I have suggested?
Thanks!
I'm making a little game with HTML5 and MooTools and I have performance problems on Firefox. I have implemented a counter to determine how often my update method gets called, and it reports 64 calls per second. The result looks much, much slower (like 30 FPS). I think my problem is actually described in this article: http://blog.sethladd.com/2011/03/measuring-html5-browser-fps-or-youre.html. I couldn't find a way to solve this directly, but I think I can optimize the performance.
I think one big problem in my logic is that I draw every single object on the canvas directly. I have done some games in Java before and got great performance improvements by manipulating an image in memory and drawing just the final image. This way the browser would have far fewer draw requests and could perhaps draw faster.
Is there a way to do this? I have found some libraries for image manipulation in JavaScript, but I would like to do it myself.
I can't show you the full code, because the project is for school and because it is way too big (~1500 lines of code).
http://www.html5rocks.com/en/tutorials/canvas/performance/
Maybe this will help. It shows you how to improve performance by using an offscreen canvas to render your scene.
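The core of it is just a second canvas that never gets attached to the DOM, something like this (the 'game' element ID and the drawing calls are placeholders):

const gameCanvas = document.getElementById('game');    // the visible canvas
const screenCtx = gameCanvas.getContext('2d');

const buffer = document.createElement('canvas');       // offscreen canvas, kept in memory
buffer.width = gameCanvas.width;
buffer.height = gameCanvas.height;
const bufferCtx = buffer.getContext('2d');

function render() {
  bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
  // ... draw every game object into bufferCtx here ...
  screenCtx.drawImage(buffer, 0, 0);                    // one blit onto the visible canvas
}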
Does anyone know why JavaScript performance would be affected by loading lots of external JPG/PNG images into HTML5 Image() objects, totalling approx 200 MB-250 MB? Performance also seems to be affected by the cache, i.e. if the cache is full(-ish) from previous browsing, performance on the current site is greatly reduced.
There are two ways I can crudely solve it:
1. Clear the cache manually.
2. Minimise the browser, wait about 20 seconds and re-open it, after which iOS/the browser has reclaimed the memory and runs the JS as it should.
I would have expected iOS to reclaim the memory required to run the current task, but it seems not to. Another workaround is to load 200 MB of 'cache clearing' images into Image() objects, then remove these by setting src = "". This does seem to help, but it's not an elegant solution...
please help?
First and foremost, read the excellent post on the LinkedIn Engineering blog. Read it carefully and check whether there are some optimizations that you can also try in your application. If you have tried all of them and they still haven't solved your performance issues, read on.
I assume that you have some image gallery or magazine-style content area on your page.
How about having this image area in a separate iframe? What you could do then is this:
Have two iframes. Only one should be visible and active at a time.
Load images into the first iframe. Track the size of the loaded images. If exact size tracking is hard, numberOfLoadedImages * averageImageSize might be a pretty good approximation.
As that number approaches some threshold, start preloading the currently visible content into the second iframe.
Flip the visibility of iframes so the second one becomes active.
Clear the inner content of the first frame.
Repeat the whole procedure as necessary.
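A rough sketch of the bookkeeping (the element IDs, average size and threshold are made-up numbers, and it assumes same-origin frames):

const AVERAGE_IMAGE_SIZE = 250 * 1024;        // assume ~250 KB per image
const MEMORY_THRESHOLD = 100 * 1024 * 1024;   // flip frames after ~100 MB

let activeFrame = document.getElementById('frameA');
let standbyFrame = document.getElementById('frameB');
let loadedImages = 0;

function onImageLoaded() {
  loadedImages += 1;
  if (loadedImages * AVERAGE_IMAGE_SIZE < MEMORY_THRESHOLD) return;

  // Show the standby frame (already preloaded with the visible content),
  // clear the old one and swap the roles.
  standbyFrame.style.display = 'block';
  activeFrame.style.display = 'none';
  activeFrame.contentWindow.document.body.innerHTML = '';
  [activeFrame, standbyFrame] = [standbyFrame, activeFrame];
  loadedImages = 0;
}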
I don't know for sure if this would work for you, but I hope that the WebKit engine on the iPad clears the memory of frames independently.
EDIT: It turned out you're writing a game.
If it's a game I assume that you want to have many game objects on the screen at the same time and you won't be able to simply unload some parts of them. Here are some suggestions for that case:
Don't use DOM for games: it's too memory-heavy. Fortunately, you're using canvas already.
Sprite your images. Image sprites not only help reduce the number of requests; they also let you reduce the number of Image objects and keep the per-file overhead lower (see the sketch after this list). Read about using sprites for canvas animations on the IE blog.
Optimize your images. There are several file size optimizers for images. SmushIt is one of them. Try it for your images. Pay attention to other techniques discussed in this great series by Stoyan Stefanov at YUI blog.
Try vector graphics. SVG is awesome and canvg can draw it on top of canvas.
Try simplifying your game world. Maybe some background objects don't need to be that detailed. Or maybe you can get away with fewer sprites for them. Or you can use image filters and masks for different objects of the same group. As Dave Newton said, the iPad is a very constrained device and chances are you can get away with relatively low-quality sprites.
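For the spriting suggestion above, the key is the 9-argument form of drawImage, which picks a source rectangle out of the sheet (the file name and 64x64 sprite size are assumptions):

const sheet = new Image();
sheet.src = 'sprites.png';            // one sheet instead of many Image objects

function drawSprite(ctx, index, dx, dy) {
  // assumes the sheet has finished loading before this is called
  const sw = 64, sh = 64;                         // size of one sprite within the sheet
  const perRow = Math.floor(sheet.width / sw);
  const sx = (index % perRow) * sw;               // source rectangle inside the sheet
  const sy = Math.floor(index / perRow) * sh;
  ctx.drawImage(sheet, sx, sy, sw, sh, dx, dy, sw, sh);
}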
These were all suggestions for reducing the amount of data you have to load. Here are some other suggestions that might work for you:
Preload images that you will need and unload images that you no longer need. If your game has "levels" or "missions", load only the sprites needed for the current one.
Try loading "popular" images first and downloading the remaining ones in the background. You can use a separate <iframe> for that so your main game loop won't be interrupted by downloads. You can also use cross-frame messaging to coordinate your downloader frame.
You can store the most popular images in localStorage, Application Cache and WebSQL. They can provide you with 5 MB of storage each; that's 15 megs of persistent cache for you (see the sketch after this list). Note that you can use typed arrays for localStorage and WebSQL. Also keep in mind that Application Cache is quite hard to work with.
Try to package your game as a PhoneGap application. This way you can save your users from downloading a huge amount of data before playing the game. 200 megs as a single download just to open a page is way too much. Most people won't even bother to wait for that.
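For the localStorage option, one simple approach (sketch only) is to cache small same-origin images as data URLs:

function cacheImage(key, img) {
  const c = document.createElement('canvas');
  c.width = img.width;
  c.height = img.height;
  c.getContext('2d').drawImage(img, 0, 0);
  try {
    // throws on cross-origin (tainted) images or when the ~5 MB quota is exceeded
    localStorage.setItem(key, c.toDataURL('image/png'));
  } catch (e) { /* skip caching */ }
}

function loadCachedImage(key) {
  const dataUrl = localStorage.getItem(key);
  if (!dataUrl) return null;
  const img = new Image();
  img.src = dataUrl;
  return img;
}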
Other than that, your initial suggestion to override the cache with your own images is actually valid. Just don't do it straight away; explore the possibilities for reducing the download size of your game first.
I managed to reduce the impact by setting all the images that aren't currently in the viewport to display:none. This was with background images though, and I haven't tested with over 100 MB of images, so I can't say whether this truly helps. But it's definitely worth trying.