(Three.js) Bloom effect performance - javascript

The scene runs at 50-60 fps when there is no bloom effect:
renderer.render( scene, camera );
Then it drops to about 10 fps after adding the bloom effect:
camera.layers.set( BLOOM_SCENE );
bloomComposer.render();
camera.layers.set( ENTIRE_SCENE );
finalComposer.render();
How can I fix it? The demo link is here:
https://codepen.io/anson-chan/pen/QXeKqm?editors=1010

There are a few different problems here.
You have a LARGE number of cubes in your scene. Each cube has a unique material and is its own draw call. To keep a scene fast, you want to keep the number of draw calls around 500 or so, and the number of polygons at or below around 1.5 million.
This is called being "draw call bound".
There are some strategies for dealing with this... such as using hardware instancing and/or a library like THREE.BAS to do instancing of your cubes. These techniques can render LARGE numbers of objects with a single draw call, and may help you.
(https://github.com/zadvorsky/three.bas)
(http://three-bas-examples.surge.sh/examples/instanced_prefabs/)
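For reference, here is a minimal sketch of the hardware-instancing idea using three.js's built-in THREE.InstancedMesh (this is not code from the demo; the cube count and placement are placeholders):

    // Merge all cubes into one InstancedMesh so they render in a single draw call.
    const CUBE_COUNT = 5000; // placeholder count
    const geometry = new THREE.BoxGeometry(1, 1, 1);
    const material = new THREE.MeshStandardMaterial({ color: 0xffffff });
    const cubes = new THREE.InstancedMesh(geometry, material, CUBE_COUNT);

    const dummy = new THREE.Object3D();
    for (let i = 0; i < CUBE_COUNT; i++) {
      dummy.position.set(
        Math.random() * 100 - 50,
        Math.random() * 100 - 50,
        Math.random() * 100 - 50
      );
      dummy.updateMatrix();
      cubes.setMatrixAt(i, dummy.matrix); // one transform per instance
    }
    scene.add(cubes); // thousands of cubes, one draw call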
Next up:
your resize handler is always resizing to the bounds of the window, not to the size of the actual canvas. You are also NOT resizing the render passes in your code. Each pass that has a width/height parameter needs to be resized, along with the EffectComposer itself; at the moment everything is always sized to the window rather than to the canvas area you are actually rendering into.
Instead, try something like renderer.setSize(renderer.domElement.clientWidth, renderer.domElement.clientHeight, false), control the canvas size via CSS rules, and then make sure the bloom pass size is set and the EffectComposer size is also set. The 'false' parameter on setSize prevents THREE from setting the width/height of the canvas style, and allows you to control it explicitly. Otherwise, a size change can trigger additional resizes when THREE tries to change the CSS of the canvas to the new width/height you're passing.
This problem is causing you to render a full window-sized canvas and effects pass no matter what the display size is.
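As a rough sketch of the fix (assuming the bloomComposer, finalComposer and bloomPass names from the question's setup, with CSS controlling the canvas element's display size), a resize handler along these lines sizes everything to the canvas rather than the window:

    function onResize() {
      const canvas = renderer.domElement;
      const width = canvas.clientWidth;   // CSS size, controlled by your stylesheet
      const height = canvas.clientHeight;

      camera.aspect = width / height;
      camera.updateProjectionMatrix();

      // 'false' stops three.js from rewriting the canvas style width/height
      renderer.setSize(width, height, false);

      bloomComposer.setSize(width, height);
      finalComposer.setSize(width, height);
      bloomPass.setSize(width, height);
    }
    window.addEventListener('resize', onResize);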
The next issue is that blur/bloom is an inherently costly operation, since it requires scaling down the entire framebuffer several times to generate the blurred/bloomed version of the effect. This is compounded by the sizing problem mentioned above.
Next up... I don't know what hardware you are running on, but if it's a retina display, your devicePixelRatio will be greater than 1, which further compounds the sizing problems: on a retina display, a devicePixelRatio of 2 makes the renderer draw 4x as many pixels as a devicePixelRatio of 1. You may want to consider using 1 instead of whatever you get from window.devicePixelRatio.
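For example (a suggestion, not something the demo currently does):

    // Render one framebuffer pixel per CSS pixel, even on high-DPI screens.
    renderer.setPixelRatio(1);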
All of these issues compound each other. GPUs can only do so much, so you have to be clever about how you manage them and what kind of load you place on them.
Hope this helps.

Related

Strong blur effect shader in a single-pass?

I'm learning WebGL and I'm looking to create a strong blur effect that is efficient to produce. I've been looking around and it seems that the blur shaders I've found do a sort of flip-flop operation where they blur the image onto a frame buffer, then blur that frame buffer onto the original frame buffer, etc. and just keep blurring back and forth until the desired intensity is achieved.
This is pretty slow because you have to process the entire screen multiple times, and each pixel in each sampling requires grabbing many pixels around it, so it ends up being billions of texture lookups.
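To illustrate, the flip-flop pattern described above usually boils down to something like this in WebGL (the two framebuffer objects and the drawBlurPass helper here are hypothetical, not from any particular library):

    let read = fboA, write = fboB;       // two offscreen framebuffers
    for (let i = 0; i < passCount; i++) {
      gl.bindFramebuffer(gl.FRAMEBUFFER, write.framebuffer);
      drawBlurPass(read.texture);        // every output pixel samples many neighbours
      [read, write] = [write, read];     // swap and blur the result again
    }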
In my case, the blur doesn't need to be that accurate, but it does need to be fairly strong. Specifically, I'm going to be blurring my light map (the original question included before/after images of the light map).
The main reason is that I want to smooth out the shadow edges, as currently they're pretty hard. Does something like this exist? I don't need any dynamic blurring; the radius of the blur will always be the same (around 15?), so I assume it should be possible. Is it possible to do this without flip-flopping and reprocessing the frame buffer multiple times?
Links for reference (the existing solutions I've found, both of which seem to flip-flop and reprocess the frame buffer multiple times):
https://webglfundamentals.org/webgl/lessons/webgl-image-processing-continued.html
https://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/

How can I improve performance for multiple images drawn in webgl?

I am programming a simple WebGL application, which draws multiple images (textures) over each other. Depending on the scroll position, the scale and opacity of the images are changed to create a 3D multi-layer parallax effect. You can see the effect here:
http://gsstest.gluecksschmiede.io/ct2/
I am currently working on improving the performance, because the effect does not perform well on older devices (low fps). I lack the in-depth knowledge of WebGL (and WebGL debugging) to see what the reason for the bad performance is, so I need help. This question is only concerned with desktop devices.
I've tried / I am currently:
always working with the same program and shader pair
the images are in 2000x1067 and already compressed. I need png because of the transparency. I could compress them a little more, but not much. The resolution has to be that way.
already using requestAnimationFrame and non-blocking scroll listeners
The webgl functions I am using to draw the image can be read in this file:
http://gsstest.gluecksschmiede.io/ct2/js/setup.js
Shader code can be found here (just right click -> show sourcecode):
http://gsstest.gluecksschmiede.io/ct2/
Basically, I've used this tutorial/code and did just a few changes:
https://webglfundamentals.org/webgl/lessons/webgl-2d-drawimage.html
I'm then using this setup code to draw the images depending on current scroll position as seen in this file (see the "update" method):
http://gsstest.gluecksschmiede.io/ct2/js/para.js
In my application, about 15 images of 2000x1067 size are drawn on to each other for every frame. I expected this to perform way better than it actually is. I don't know what is causing the bottleneck.
How you can help me:
Provide hints or ideas about which code / image compression / other changes could improve rendering performance
Provide help on how to debug the performance. Is there a smarter way than just printing out times with console.log and performance.now()?
Provide ideas on how I could gracefully degrade or provide a fallback that performs better on older devices.
This is just a guess but ...
Drawing 15 fullscreen images is going to be slow on many systems. It's just too many pixels. It's not the size of the images, it's the size at which they are drawn. For example, on my MacBook Air the resolution of the screen is 2560x1600.
You're drawing 15 images. Those images are drawn into a canvas. That canvas is then drawn into the browser's window and the browser's window is then drawn on the desktop. So that's at least 17 draws or
2560 * 1600 * 17 = 70meg pixels
To get a smooth framerate we generally want to run at 60 frames a second. 60 frames a second means
60 frames a second * 70 meg pixels = 4.2gig pixels a second.
My GPU is rated for 8gig pixels a second so it looks like we might get 60fps here
Let's compare to a 2015 MacBook Air with Intel HD Graphics 6000. Its screen resolution is 1440x900, which if we calculate things out comes to 1.3 gig pixels at 60 frames a second. Its GPU is rated for 1.2 gig pixels a second, so we're not going to hit 60fps on a 2015 MacBook Air.
Note that like everything, the specified max fillrate for a GPU is one of those theoretical max things, you'll probably never see it hit the top rate because of other overheads. In other words, if you look up the fillrate of a GPU multiply by 85% or something (just a guess) to get the fillrate you're more likely to see in reality.
You can test this easily: just make the browser window smaller. If you make the browser window 1/4 the size of the screen and it runs smoothly, then your issue was fillrate (assuming you are resizing the canvas's drawing buffer to match its display size). This is because once you do that, fewer pixels are being drawn (75% fewer) but all the other work stays the same (all the JavaScript, WebGL, etc).
Assuming that shows your issue is fillrate then things you can do
Don't draw all 15 layers.
If some layers fade out to 100% transparent then don't draw those layers. If you can design the site so that only 4 to 7 layers are ever visible at once you'll go a long way to staying under your fillrate limit
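A sketch of that idea (the layer fields and the drawLayer helper are made up for illustration):

    for (const layer of layers) {
      if (layer.opacity <= 0.001) continue; // fully faded out: skip the draw entirely
      drawLayer(layer.texture, layer.matrix, layer.opacity);
    }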
Don't draw transparent areas
You said 15 layers but it appears some of those layers are mostly transparent. You could break those apart into say 9+ pieces (like a picture frame) and not draw the middle piece. Whether it's 9 pieces or 50 pieces it's probably better than 80% of the pixels being 100% transparent.
Many game engines, if you give them an image, will auto-generate a mesh that only uses the parts of the texture that are more than 0% opaque. For example, if you make a picture-frame style image in Photoshop and load it into Unity, Unity generates a mesh that covers only the parts that are not 100% transparent (the original answer showed screenshots of this).
This is something you'd do offline either by writing a tool or doing it by hand or using some 3D mesh editor like blender to generate meshes that fit your images so you're not wasting time trying to render pixels that are 100% transparent.
Try discarding transparent pixels
This you'd have to test. In your fragment shader you can put something like
if (color.a <= alphaThreshold) {
  discard; // don't draw this pixel
}
Where alphaThreshold is 0.0 or greater. Whether this saves time might depend on the GPU, since using discard is slower than not using it. The reason is that if you don't use discard, the GPU can do certain checks early. In your case, though, I think it might be a win. Note that option #2 above, using a mesh for each plane that only covers the non-transparent parts, is by far better than this option.
Pass more textures to a single shader
This one is overly complicated but you could make a drawMultiImages function that takes multiple textures and multiple texture matrices and draws N textures at once. They'd all have the same destination rectangle but by adjusting the source rectangle for each texture you'd get the same effect.
N would probably be 8 or less, since there's a limit on the number of textures you can use in one draw call, depending on the GPU. 8 is the minimum limit IIRC, meaning some GPUs will support more than 8, but if you want things to run everywhere you need to handle the minimum case.
GPUs like most processors can read faster than they can write so reading multiple textures and mixing them in the shader would be faster than doing each texture individually.
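As a hedged illustration of what such a shader might look like (the uniform names and the four-layer limit are invented, and the per-layer texture matrices the answer mentions are omitted for brevity):

    // Fragment shader for a hypothetical drawMultiImages(): four layers per draw call.
    const multiLayerFragmentShader = `
      precision mediump float;
      varying vec2 v_texcoord;
      uniform sampler2D u_tex0;
      uniform sampler2D u_tex1;
      uniform sampler2D u_tex2;
      uniform sampler2D u_tex3;
      uniform float u_alpha0, u_alpha1, u_alpha2, u_alpha3;

      // blend one layer over the running colour, back to front
      vec3 over(vec3 dst, vec4 src, float alpha) {
        return mix(dst, src.rgb, src.a * alpha);
      }

      void main() {
        vec3 color = vec3(0.0);
        color = over(color, texture2D(u_tex0, v_texcoord), u_alpha0);
        color = over(color, texture2D(u_tex1, v_texcoord), u_alpha1);
        color = over(color, texture2D(u_tex2, v_texcoord), u_alpha2);
        color = over(color, texture2D(u_tex3, v_texcoord), u_alpha3);
        gl_FragColor = vec4(color, 1.0);
      }
    `;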
Finally it's not clear why you're using WebGL for this example.
Option 4 would be fastest, but I wouldn't recommend it; it seems like too much work to me for such a simple effect. Still, I just want to point out that, at least at a glance, you could just use N <div>s, set their CSS transform and opacity, and get the same effect. You'd still have the same issues: 15 full-screen layers is too many, and you should hide <div>s whose opacity is 0% (the browser might do that for you, but best not to assume). You could also use the 2D canvas API and should see similar performance. Of course, if you're doing some kind of special effect (I didn't look at the code), then feel free to use WebGL; just at a glance it wasn't clear.
A couple of things:
Performance profiling shows that your problem isn't WebGL but a memory leak.
Image compression is irrelevant in WebGL, as WebGL doesn't care about PNG, JPG or WebP. Textures are always just arrays of bytes on the GPU, so each of your layers is 2000 * 1067 * 4 bytes = 8.536 megabytes.
Never construct new data in your animation loop (you currently do); learn how to use the math library you're using so you can reuse objects instead.
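For the third point, a sketch of the idea using a gl-matrix-style API (the project's actual math library may differ):

    const matrix = mat4.create();        // allocated once, outside the loop

    function update() {
      mat4.identity(matrix);             // reuse the same object every frame
      // ...apply scale/translation from the scroll position, then draw...
      requestAnimationFrame(update);
    }
    requestAnimationFrame(update);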

Render scene at lower resolution (three.js)

How can I decrease the resolution of my rendered canvas (just like I can do with the camera resolution in Blender)? I've seen another question saying that I need to use
renderer.setDevicePixelRatio(window.devicePixelRatio)
The problem is that the object either turns black or nothing shows on the canvas, so I don't really know whether this solves my problem.
I also tried to use
renderer.setSize()
and
renderer.setViewport()
together and separately, but they only changed the canvas size to a very small one (I need a large preview of the canvas), and even though the viewport got to the size I wanted, the objects seem to be rendered only at the smaller size, so I can't see all of them; it doesn't do the trick.
Also, if possible, is there a way to do that by manually changing the image buffer to a lower resolution one, and displaying it?
The thing I needed was the setPixelRatio function on the renderer, not setDevicePixelRatio.
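For example, something along these lines renders at half resolution while the canvas keeps its full display size (0.5 is just an illustrative value):

    renderer.setSize(window.innerWidth, window.innerHeight); // display (CSS) size
    renderer.setPixelRatio(0.5);                             // drawing buffer at half resolution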

Do browsers (chrome/firefox/safari) cull non-visible svg shapes?

We currently have a screen with about 10,000 shapes. We are allowing users to pan and zoom to explore. I thought of a couple of optimizations so we can keep using SVG in the mid-term:
culling shapes not on screen (only writing objects in our viewport to the DOM)
reducing edges when zoomed out
These two tactics go hand in hand; however, I was wondering whether shapes not on screen are already culled and not "drawn" by most browser vendors. If not, is it better to maintain a quad tree of objects in the scene and render the set of shapes that intersect our viewport?
Yes, Firefox since version 17 has culled shapes that can't be seen. The code creates what is called a display list of the things it intends to draw. I imagine Chrome and IE use a similar mechanism, so you'd only make things slower if you tried to handle it yourself.

Pixel-by-pixel animation with javascript

I'm animating a sprite on a pixel grid. I have a few options, with pros and cons for each. I have a fair amount of javascript experience (six years), but none with this kind of thing. The problem is I don't know how expensive each option will be.
The sprite needs to render quite fast, and be inexpensive enough to have at least five running at the same time while running collision detection.
Ideally, I would like to use a grid of elements inside of a wrapper, rendering colour and alpha channels to each element from a multidimensional array.
The major pro here is that I can run pixel-by-pixel collision detection and click past the transparent parts of the sprite. With any image-based sprite, the onClick event will fire even if I click on a transparent pixel (I'll have to do a lot of work to pass clicks through transparent pixels, and it might be quite expensive).
The next option is to use css sprites. css-tricks.com/css-sprites/
This would be easy peasy, but as mentioned previously, onClicks won't pass through the transparent pixels. I can probably force it, but again, it may be expensive and take a lot of time to implement.
Another option is animated gifs, but they are huge, limited in the colour department, and hard to control animation-wise. I'd rather not go there.
And then there's the html5 canvas element, which I don't know very much about and would like to stay away from if at all possible. I don't know how any of my code would even work in relation to the canvas element and I doubt it would do what I want in the long-run.
So which is the best for performance? Would the first (and most preferable) be a viable option? Or have I missed something out?
With today's browsers you will be fine on desktop computers for building a sprite out of positioned pixel sub-elements (as long as they aren't too complicated or large), and just to be safe I'd limit yourself to about 10 active sprites. With Mobile things might get a bit slow and clunky, but considering you seem to be designing a game that requires precision "onclicks" I doubt that this will be a problem.
Your most flexible bet is to use HTML5 Canvas, as you have already worked out, but it will involve quite a bit more JavaScript coding. This system will allow you to apply a number of effects to your sprites and to use pixel-perfect detection via getImageData (which allows you to read the exact pixel colour at any pixel offset).
getPixel from HTML Canvas?
If you want to avoid the technical problems and challenges of a full-screen canvas system (which can be tricky), you can actually create as many smaller canvas elements as you need and move them around as your sprites (with the ease of HTML elements). Then all you have to do is write the code that draws your animation frames and that tells whether or not the mouse has hit the sprite, using the aforementioned method (along with a click handler and some code to calculate where the user has clicked relative to your canvas element's position). Obviously, it would be best to do this in a generalised way so your code can be applied to all your sprites :)
To draw your images on the canvas you can use a spritesheet, as you mentioned in your question, together with the rather flexible drawImage() method, which supports a slicing mode. This just needs to be tied to a setInterval or requestAnimationFrame style game loop.
https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Using_images
http://www.playmycode.com/blog/2011/08/building-a-game-mainloop-in-javascript/
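To make the per-sprite-canvas idea concrete, here is a rough, hypothetical sketch (the frame size, spritesheet layout and timing are all invented):

    const FRAME_W = 32, FRAME_H = 32;    // size of one animation frame

    function makeSprite(sheet) {         // 'sheet' is a loaded spritesheet Image
      const canvas = document.createElement('canvas');
      canvas.width = FRAME_W;
      canvas.height = FRAME_H;
      const ctx = canvas.getContext('2d');
      let frame = 0;

      function drawFrame() {
        ctx.clearRect(0, 0, FRAME_W, FRAME_H);
        // drawImage in slicing mode: copy one frame out of the sheet
        ctx.drawImage(sheet, frame * FRAME_W, 0, FRAME_W, FRAME_H,
                      0, 0, FRAME_W, FRAME_H);
        frame = (frame + 1) % Math.floor(sheet.width / FRAME_W);
      }

      canvas.addEventListener('click', (e) => {
        const rect = canvas.getBoundingClientRect();
        const x = Math.floor(e.clientX - rect.left);
        const y = Math.floor(e.clientY - rect.top);
        const alpha = ctx.getImageData(x, y, 1, 1).data[3];
        if (alpha > 0) {
          // pixel-perfect hit: the click landed on a visible part of the sprite
        }
      });

      setInterval(drawFrame, 100);       // simple stand-in for a real game loop
      return canvas;
    }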
UPDATE - for those who wish to be very optimal
If you wish to take a more optimal route - which is a little bit more involved again - you can do the following. This method benefits if you have many sprites that are exactly the same with only a few (20 or 30) frames of animation:
Power your sprites by normal DIVs with a background image sprite sheet that you shift the background position of. This is the most optimal you can be, save having static images as sprites, because the browser does all the work.
For each sprite type, draw your spritesheet onto a hidden canvas element that is big enough to hold the whole spritesheet.
When a user clicks on one of your DIV sprites, take the background position as coordinates, invert them, and you should then know where on your canvas element (looked up by sprite's type) the pixel data resides.
Use getImageData on your hidden canvas's 2D context to work out whether the user has clicked on the sprite or not.
The above means you only have one canvas element in use per sprite type, the browser handles all the graphics for you, and you get pixel-perfect collisions with an onclick.
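A hedged sketch of that flow (spriteDiv, spritesheetImage and the CSS sprite setup are assumed here, not taken from the answer):

    // One hidden canvas per sprite type, used only for pixel lookups.
    const hiddenCanvas = document.createElement('canvas');
    hiddenCanvas.width = spritesheetImage.width;
    hiddenCanvas.height = spritesheetImage.height;
    const hiddenCtx = hiddenCanvas.getContext('2d');
    hiddenCtx.drawImage(spritesheetImage, 0, 0);

    spriteDiv.addEventListener('click', (e) => {
      const rect = spriteDiv.getBoundingClientRect();
      const localX = e.clientX - rect.left;             // click position inside the DIV
      const localY = e.clientY - rect.top;
      // Invert the current background-position to find the matching spot on the sheet.
      const [bgX, bgY] = getComputedStyle(spriteDiv)
        .backgroundPosition.split(' ').map(parseFloat); // e.g. "-64px 0px" -> [-64, 0]
      const sheetX = Math.floor(localX - bgX);
      const sheetY = Math.floor(localY - bgY);
      const alpha = hiddenCtx.getImageData(sheetX, sheetY, 1, 1).data[3];
      if (alpha > 0) {
        // the user clicked a non-transparent pixel of this sprite
      }
    });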
Hope the above makes sense?
How about splitting your image sprite into 30x30 cells, only having elements where a cell is opaque, and leaving a gap where a cell is transparent so that clicks fall through? You lose a bit of accuracy in where the cells can be clicked, though.
