What are maximum dimensions of Three.js render canvas? - javascript

When I set the render canvas size to ~4000x4000 px it's fine; however, starting from around 4000 (I tested 5k, 6k, 8k) the scene seems to be "cut off" in some way. See image below:
So up to ~4k the content is properly rendered in the center of the canvas, and for larger canvas sizes the content gets cut off.
Is this a Three.js/WebGL limitation?
If so, what are the actual maximum canvas dimensions that don't cause any "deformations"?
Is there anything I can do about it? I need to get a canvas snapshot in high resolution (8k).

Is there anything I can do about it? I need to get a canvas snapshot in high resolution (8k).
You can render portions of the scene using Camera.setViewOffset and then assemble the parts into a large image. Have a look at this sample, which renders the image in 4 separate parts. That example creates 4 renderers, but for your case you'd only need one: render → capture the canvas → render → capture the canvas → repeat until you have all the parts. Then you need a way to stitch those parts together; you could use GIMP or Photoshop, for example.
Here's a library that does this:
https://greggman.github.io/dekapng/
I know there's an answer of mine here that shows how to do this in three.js (I wrote that library because of that answer), but unfortunately 10 minutes of searching didn't bring it up >:(
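To make the render-in-parts idea concrete, here is a sketch. The tile math is plain JavaScript; the three.js calls in the commented usage (setViewOffset, setSize, render, toDataURL) follow the documented API, but the renderer/scene/camera setup is assumed, not shown:

```javascript
// Compute the tile grid for capturing a large image in pieces.
// fullW/fullH: target snapshot size; tileW/tileH: a size the canvas can handle.
function computeTiles(fullW, fullH, tileW, tileH) {
  const tiles = [];
  for (let y = 0; y < fullH; y += tileH) {
    for (let x = 0; x < fullW; x += tileW) {
      tiles.push({
        x, y,
        w: Math.min(tileW, fullW - x), // edge tiles may be smaller
        h: Math.min(tileH, fullH - y),
      });
    }
  }
  return tiles;
}

// Hypothetical usage with three.js (renderer, scene, camera assumed set up):
// const captures = [];
// for (const t of computeTiles(8192, 8192, 2048, 2048)) {
//   camera.setViewOffset(8192, 8192, t.x, t.y, t.w, t.h);
//   renderer.setSize(t.w, t.h);
//   renderer.render(scene, camera);
//   captures.push(renderer.domElement.toDataURL()); // stitch these later
// }
// camera.clearViewOffset();
```

An 8k snapshot with 2048-pixel tiles comes out to a 4x4 grid of 16 renders.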

This is not a limitation of three.js but of the maximum size of the drawing buffer. It's defined by the browser vendor and can't be exceeded by an application. More information in this GitHub issue:
https://github.com/mrdoob/three.js/issues/5194
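You can query the relevant driver limits at runtime before picking a canvas size. The getParameter constants below are standard WebGL; the clamping helper is just illustrative:

```javascript
// Query the driver's size limits (browser only; gl is a WebGLRenderingContext).
function maxDrawingBufferSide(gl) {
  // The drawing buffer is effectively capped by both of these limits.
  return Math.min(
    gl.getParameter(gl.MAX_TEXTURE_SIZE),
    gl.getParameter(gl.MAX_RENDERBUFFER_SIZE)
  );
}

// Pure helper: shrink a requested snapshot size to fit within the reported
// limit, preserving aspect ratio.
function clampToLimit(width, height, limit) {
  const scale = Math.min(1, limit / Math.max(width, height));
  return { width: Math.floor(width * scale), height: Math.floor(height * scale) };
}
```

Note that even within these limits, total GPU memory can still cause a huge canvas to fail.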

Related

How can I improve performance for multiple images drawn in webgl?

I am programming a simple WebGL application which draws multiple images (textures) over each other. Depending on the scroll position, the scale and opacity of the images are changed to create a 3D multi-layer parallax effect. You can see the effect here:
http://gsstest.gluecksschmiede.io/ct2/
I am currently working on improving the performance, because the effect does not perform well on older devices (low fps). I lack the in-depth knowledge of WebGL (and WebGL debugging) to see what the reason for the bad performance is, so I need help. This question is only concerned with desktop devices.
What I've tried / am currently doing:
always working with the same program and shader pair
the images are 2000x1067 and already compressed. I need PNG because of the transparency. I could compress them a little more, but not much; the resolution has to stay as it is.
already using requestAnimationFrame and non-blocking scroll listeners
The webgl functions I am using to draw the image can be read in this file:
http://gsstest.gluecksschmiede.io/ct2/js/setup.js
Shader code can be found here (just right click -> show sourcecode):
http://gsstest.gluecksschmiede.io/ct2/
Basically, I've used this tutorial/code and did just a few changes:
https://webglfundamentals.org/webgl/lessons/webgl-2d-drawimage.html
I'm then using this setup code to draw the images depending on current scroll position as seen in this file (see the "update" method):
http://gsstest.gluecksschmiede.io/ct2/js/para.js
In my application, about 15 images of size 2000x1067 are drawn on top of each other every frame. I expected this to perform way better than it actually does. I don't know what is causing the bottleneck.
How you can help me:
Provide hints or ideas on what changes (code, image compression, whatever) could improve rendering performance
Provide help on how to debug the performance. Is there a more clever way than just printing out times with console.log and performance.now()?
Provide ideas on how I could gracefully degrade, or provide a fallback that performs better on older devices.
This is just a guess but ...
Drawing 15 fullscreen images is going to be slow on many systems; it's just too many pixels. It's not the size of the images, it's the size at which they are drawn. For example, on my MacBook Air the resolution of the screen is 2560x1600.
You're drawing 15 images. Those images are drawn into a canvas, that canvas is then drawn into the browser's window, and the browser's window is then drawn on the desktop. So that's at least 17 draws, or
2560 * 1600 * 17 = ~70 megapixels
To get a smooth framerate we generally want to run at 60 frames a second. 60 frames a second means
60 frames a second * 70 megapixels = 4.2 gigapixels a second.
My GPU is rated for 8 gigapixels a second, so it looks like we might get 60fps here.
Let's compare to a 2015 MacBook Air with Intel HD Graphics 6000. Its screen resolution is 1440x900, which if we calculate things out comes to 1.3 gigapixels a second at 60 frames a second. Its GPU is rated for 1.2 gigapixels a second, so we're not going to hit 60fps on a 2015 MacBook Air.
Note that, like everything, the specified max fillrate for a GPU is a theoretical maximum; you'll probably never see it hit the top rate because of other overheads. In other words, when you look up the fillrate of a GPU, multiply by 85% or so (just a guess) to get the fillrate you're more likely to see in reality.
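The back-of-the-envelope arithmetic above can be wrapped in a quick estimator. This is just the same multiplication in code; the 85% derating factor is the same guess as above:

```javascript
// Pixels per second needed to draw `layers` fullscreen passes at `fps`
// on a `width` x `height` screen.
function fillrateNeeded(width, height, layers, fps) {
  return width * height * layers * fps;
}

// Rough check against a GPU's rated fillrate (pixels/second), derated
// because the theoretical maximum is rarely reached in practice.
function likelyToHitFps(gpuFillrate, width, height, layers, fps, derate = 0.85) {
  return gpuFillrate * derate >= fillrateNeeded(width, height, layers, fps);
}
```

For the numbers above: a 2560x1600 screen with 17 draws at 60fps needs about 4.2 gigapixels a second, within reach of an 8-gigapixel GPU but not a 1.2-gigapixel one.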
You can test this easily: just make the browser window smaller. If you make the browser window 1/4 the size of the screen and it runs smoothly, then your issue was fillrate (assuming you are resizing the canvas's drawing buffer to match its display size). Once you do that, fewer pixels are being drawn (75% fewer) but all the other work stays the same (all the JavaScript, WebGL, etc.).
Assuming that shows your issue is fillrate, then things you can do:
Don't draw all 15 layers.
If some layers fade out to 100% transparent then don't draw those layers. If you can design the site so that only 4 to 7 layers are ever visible at once you'll go a long way to staying under your fillrate limit
Don't draw transparent areas
You said 15 layers, but it appears some of those layers are mostly transparent. You could break those apart into, say, 9+ pieces (like a picture frame) and not draw the middle piece. Whether it's 9 pieces or 50 pieces, it's probably better than 80% of the pixels being 100% transparent.
Many game engines, if you give them an image, will auto-generate a mesh that only uses the parts of the texture that are > 0% opaque. For example, I made this frame in Photoshop
Then, loading it into Unity, you can see Unity made a mesh that covers only the non-100%-transparent parts
This is something you'd do offline, either by writing a tool, doing it by hand, or using a 3D mesh editor like Blender, to generate meshes that fit your images so you're not wasting time trying to render pixels that are 100% transparent.
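A crude offline version of that idea in plain JavaScript: scan the image's alpha channel and compute the tight bounding box of non-transparent pixels, so you only draw that region instead of the full quad. (In a browser you'd get the raw RGBA bytes from ctx.getImageData; here the function just takes them as an array.)

```javascript
// Find the bounding box of pixels with alpha > 0 in flat RGBA data
// (width * height * 4 bytes). Returns null if fully transparent.
function opaqueBounds(rgba, width, height) {
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (rgba[(y * width + x) * 4 + 3] > 0) { // alpha byte
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  if (maxX < 0) return null;
  return { x: minX, y: minY, w: maxX - minX + 1, h: maxY - minY + 1 };
}
```

A single bounding box is much coarser than the multi-piece meshes a game engine generates, but it already skips large fully transparent margins.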
Try discarding transparent pixels
This you'd have to test. In your fragment shader you can put something like
if (color.a <= alphaThreshold) {
  discard; // don't draw this pixel
}
Where alphaThreshold is 0.0 or greater. Whether this saves time may depend on the GPU, since using discard is slower than not: if you don't use discard, the GPU can do certain checks early. In your case, though, I think it might be a win. Note that option #2 above, using a mesh for each plane that only covers the non-transparent parts, is by far better than this option.
Pass more textures to a single shader
This one is more complicated, but you could make a drawMultiImages function that takes multiple textures and multiple texture matrices and draws N textures at once. They'd all have the same destination rectangle, but by adjusting the source rectangle for each texture you'd get the same effect.
N would probably be 8 or less, since there's a limit on the number of textures you can use in one draw call, depending on the GPU. 8 is the minimum limit IIRC, meaning some GPUs will support more than 8, but if you want things to run everywhere you need to handle the minimum case.
GPUs like most processors can read faster than they can write so reading multiple textures and mixing them in the shader would be faster than doing each texture individually.
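The per-pixel work such a combined shader would do is just repeated source-over blending. A scalar sketch of that math in plain JavaScript (names illustrative; a real shader would do this per fragment, back to front):

```javascript
// Source-over blend `layers` (ordered back to front) for a single pixel.
// Each layer is { r, g, b, a } with components in [0, 1].
function compositePixel(layers) {
  let out = { r: 0, g: 0, b: 0, a: 0 };
  for (const l of layers) {
    out = {
      r: l.r * l.a + out.r * (1 - l.a),
      g: l.g * l.a + out.g * (1 - l.a),
      b: l.b * l.a + out.b * (1 - l.a),
      a: l.a + out.a * (1 - l.a),
    };
  }
  return out;
}
```

Doing N of these blends in one shader invocation means one write to the framebuffer instead of N, which is where the fillrate saving comes from.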
Finally, it's not clear why you're using WebGL for this effect.
Option 4 would be fastest, but I wouldn't recommend it; it seems like too much work to me for such a simple effect. Still, I just want to point out that, at least at a glance, you could just use N <div>s, set their CSS transform and opacity, and get the same effect. You'd still have the same issues: 15 fullscreen layers is too many, and you should hide <div>s whose opacity is 0% (the browser might do that for you, but best not to assume). You could also use the 2D canvas API and should see similar performance. Of course, if you're doing some kind of special effect (I didn't look at the code) then feel free to use WebGL; just at a glance it wasn't clear.
A couple of things:
Performance profiling shows that your problem isn't WebGL but memory leaking.
Image compression is irrelevant in WebGL, as WebGL doesn't care about PNG, JPG or WebP. Textures are always just arrays of bytes on the GPU, so each of your layers is 2000 * 1067 * 4 bytes = 8.536 megabytes.
Never construct data in your animation loop (you currently do); learn how to use the math library you're using.
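To illustrate what "don't construct data in your animation loop" means in practice (the matrix layout and names here are just for illustration, not from the linked code):

```javascript
// Bad: allocates a fresh array every frame, creating garbage-collector churn
// that shows up as periodic frame drops.
function updateBad(scroll) {
  return [1, 0, 0, 0, 1, 0, scroll, scroll * 0.5, 1];
}

// Good: preallocate once, then overwrite in place each frame.
const scratch = new Float32Array([1, 0, 0, 0, 1, 0, 0, 0, 1]);
function updateGood(scroll) {
  scratch[6] = scroll;
  scratch[7] = scroll * 0.5;
  return scratch; // same object every call, no allocation
}
```

Most math libraries expose "out parameter" variants of their operations for exactly this reason.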

Render scene at lower resolution (three.js)

How can I decrease my rendered canvas resolution (just like I can with the camera resolution in Blender)? I've seen another question saying that I need to use
renderer.setDevicePixelRatio(window.devicePixelRatio)
The problem is that the object either turns black or nothing shows on the canvas, so I don't really know whether this solves my problem.
I also tried to use
renderer.setSize()
and
renderer.setViewport()
together and separately, but it only changed the canvas to a very small size (I need a large preview of the canvas). And even though the viewport got to the size I wanted, it seems the objects are rendered only at the smaller size, so I can't see all of them; it doesn't do the trick.
Also, if possible, is there a way to do that by manually changing the image buffer to a lower resolution one, and displaying it?
The thing I needed was the setPixelRatio function on the renderer, not setDevicePixelRatio.
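For reference, setPixelRatio scales the drawing buffer relative to the canvas's CSS size, which is why a ratio below 1 gives a lower-resolution render at the same display size. The relationship is simple (pure math below; the commented three.js calls follow the documented API):

```javascript
// Drawing-buffer size produced by renderer.setSize(cssW, cssH) when the
// renderer's pixel ratio is `ratio`. Ratios < 1 render at lower resolution
// while keeping the displayed (CSS) size large.
function drawingBufferSize(cssW, cssH, ratio) {
  return { width: Math.floor(cssW * ratio), height: Math.floor(cssH * ratio) };
}

// Hypothetical usage:
// renderer.setPixelRatio(0.5);   // render at half resolution
// renderer.setSize(1920, 1080);  // canvas still displays at 1920x1080
```

The browser then stretches the smaller buffer to fill the canvas element.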

How much does image size affect CSS rendering / painting performance?

I'm working on a JS / HTML game where all objects are HTML elements (mostly <img>). Their top and left CSS attributes (sometimes along with others, like margins, transformations etc.) are dynamically modified by JS code (every frame, basically). I noticed a huge performance improvement when I switched from .png files to .gif (22 fps -> 35 fps), but still:
Can further reduction of the files' size (by 10-30%) actually noticeably improve CSS transformation performance? I would just test it, but I'm talking about ~250 GIF files, and I don't want to lose too much quality either.
Reducing the amount of data (= file size) that has to be processed will normally have a positive performance impact, but how much impact it has always needs to be tested and measured, as it depends on how well your code works together with the surrounding frameworks, the browser, and even how the underlying hardware does its calculations.
In a worst case, where you cause a lot of resizing and recalculation of new pixels, especially if you have to increase the display size of your image again, you might even create a loss in performance.
There is no absolute answer to how good or bad your change will be until you have tested and measured it in your system yourself.
About Reducing GIF File Size:
A positive factor here is that when you cut the width and height of the image in half, the actual file size can be reduced by around 75%.
When using GIF images, reducing the number of different colors used in the image will reduce the size as well. Often you can select the number of colors when exporting GIFs from image-editing tools. Photoshop has an "Export for Web" option which tries to set the optimal settings for usage in web pages.
Regarding Game Performance and Images:
If you already know how you will resize the images (fixed sizes like +50% and +100%), using CSS image sprites with pre-resized images can bring a good improvement, as you won't have to resize the image but just use an existing part of the sprite sheet. This reduces the amount of image-manipulation processing and therefore increases performance.
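A sketch of the sprite-sheet idea: pre-render each variant into one sheet laid out left to right, then select a frame by offset instead of scaling at runtime. The file name and layout here are illustrative assumptions:

```javascript
// CSS background-position that shows frame `index` of a horizontal
// sprite sheet whose frames are `frameW` pixels wide.
function spriteOffset(index, frameW) {
  return `${-index * frameW}px 0px`;
}

// Hypothetical usage in the browser:
// el.style.backgroundImage = "url(sprites.png)";
// el.style.backgroundPosition = spriteOffset(2, 100); // shows the third frame
```

Switching frames this way changes only which part of an already-decoded image is shown, so no resampling happens per frame.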
If you want performance, quality and small file size all at once, the best way to go is to replace as many raster graphics with vector graphics as possible. Most modern 2D games use a lot of vector graphics.
Vector graphics are easier to resize, as only the splines have to be recalculated and filled; resizing raster graphics requires a calculation for each new pixel.
All common modern browsers support the SVG format, which can be sized through CSS. If you want to try it out, a free open-source tool to create vector graphics is Inkscape.
Hope this helps.
Why don't you use JPEG? It's compact. And yes, image size affects performance.
Also, check out CSS image sprites: you can have a single image with all your possible icons/images and control the view using CSS. You'll get better performance because you'll be loading a single image.

Change DPI of canvas element in HTML5

I'm looking to create a chrome application that allows the user to design a few different things using the canvas element. After they're done, I'd like to use a mask to crop out everything outside a specified area and then send the image to a printer. However, I need to be able to control the printed size precisely.
When I was making a similar app in python, it was a simple matter of setting the DPI to a ratio of the resolution to the dimensions I needed. I'm trying to find if there's a solution as simple as this one for HTML5.
I've found a couple of posts asking for similar things, but they're unanswered and don't have much activity.
Any help would be appreciated!
Canvas is a bitmap (raster) format. You cannot just increase the DPI without horrible artifacts appearing.
You can, however, record all the drawing (strokes/fills etc.) and then, at print time, create a larger canvas and scale up all the drawing commands to fit the higher-resolution canvas. As long as you don't use bitmapped images or direct pixel manipulation, the results will be good, though large canvas sizes can be problematic on some devices and browsers. A better way is to convert it all to a vector format like SVG and print that, or skip the canvas altogether and draw to SVG.
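A minimal sketch of the record-and-replay idea: keep the drawing as a command list, then replay it into any size of context with a scale factor. The 2D-context calls are standard; the command format itself is made up for illustration:

```javascript
// Record drawing as data instead of pixels, so it can be replayed at any DPI.
const commands = [];
function recordRect(x, y, w, h, fill) {
  commands.push({ x, y, w, h, fill });
}

// Replay into a print-sized 2D context at `scale`
// (e.g. 300dpi / 96dpi ≈ 3.125 for a print-resolution canvas).
function replay(ctx, scale) {
  for (const c of commands) {
    ctx.fillStyle = c.fill;
    ctx.fillRect(c.x * scale, c.y * scale, c.w * scale, c.h * scale);
  }
}
```

Because only the coordinates are scaled, the replayed output is crisp at any target resolution, exactly what setting a DPI achieves in raster tools.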

HTML5 Canvas fillRect slow

I'm trying to write a simple raycaster using an HTML5 canvas and I'm getting abysmal framerates. Firefox's profiler reports 80% of my runtime is spent in context2d.fillRect(), which I'm using per column for floors and ceilings and per pixel on the textured walls. I came across this question and found that fillRect was faster than 1x1 px images by 40% on Chrome and 4% on Firefox. They mention that it's still calculating alpha stuff, although I'd assume that if the alpha is 1 it would have a fast path for opaque rendering? Are there any other methods for doing a lot of rectangle and pixel blitting with JavaScript I should try out?
A solution I have found for this is to wrap each fillRect() call in its own path:
canvasContext.beginPath();
canvasContext.rect(1, 1, 10, 10);
canvasContext.fill();
canvasContext.closePath();
It seems that a call to rect() adds to the path from the previous rect() call; used incorrectly, this can lead to a memory leak or a buildup of resource usage.
There are two solutions that you could try in order to reduce the CPU usage of your renderings.
Try using requestAnimationFrame so your browser renders the canvas graphics whenever it is ready, and especially only when the user has the canvas's tab open in the browser. More info: https://developer.mozilla.org/en-US/docs/Web/API/window.requestAnimationFrame
Second, depending on whether part or all of your content is dynamic, pre-render portions of your canvas to other "hidden" canvases in memory with JavaScript (for instance, split your main canvas into 4 sub-canvases, so you only have to draw 4 elements on screen).
PS: If you're on a Mac, using Firefox combined with multiple canvas renderings will increase your CPU usage compared to Chrome.
Hope this helps
