p5.js sketch very slow on mobile devices

I'm building a creative simulation in p5.js using double pendula. Here is the link to the stripped-down version of the code (I'm not sure it can be made smaller):
https://github.com/amzon-ex/amzon-ex.github.io/tree/master/dp-sketch-stripped
Here is where you can see the animation: https://amzon-ex.github.io/dp-sketch-stripped/dp-sketch.html
When I run this on my laptop (Windows 10, MS Edge 88) I get about 55-60 fps, which is what I want. However, when I run the same sketch on a mobile device (Android, Chrome 88) the performance is very poor and I hardly get 5-6 fps. I do not understand what is complicated about my code that contributes to such poor performance. I have run other kinds of p5.js sketches on mobile before and they've worked well.
Any insight is appreciated.
Since I have a few files in the project link I provided, here's a quick guide (there is absolutely no need to read everything):
sketch.js is the key file, which builds the animation. It takes an image and builds an array (lumamap, in setup()) filled with brightness values from the image. Then I draw trails for my pendula, where the brightness of the trail at any pixel corresponds to the brightness value in lumamap.
dp-sketch.html is the HTML container for the p5 sketch.
libs/classydp.js houses the DoublePendulum class which describes a double pendulum object.

As I've found out with some help, the issue is tied to varying pixel density on different devices. As mobile devices have higher pixel density than desktops/laptops, my 500x500 canvas becomes much bigger (in physical pixels) on those displays, leading to many more pixels to render. Coupled with the fact that mobile processors are weaker than desktop processors, the lag/low framerate is expected.
This can be fixed by calling pixelDensity(1) in setup(), which stops p5.js from rendering a larger backing canvas on dense-pixel devices.
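For reference, a minimal sketch of the fix (the 500x500 canvas size matches the question; the drawing code is just a placeholder):

function setup() {
  pixelDensity(1);        // 1 canvas pixel per CSS pixel, even on HiDPI screens
  createCanvas(500, 500); // stays 500x500 physical pixels on mobile too
}

function draw() {
  background(0);
  // ...pendulum update and trail drawing go here...
}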

Related

Safari: big drop in <canvas> performance above certain fixed size

Background
While working on a complex full-screen 2D animation using multiple <canvas> layers, I discovered a problem in Safari: rendering performance drops sharply after the canvas gets bigger than exactly 3840x3840. I ran multiple tests (benchmark I wrote: https://codepen.io/kiler129/full/Exbgrqp) and obtained a peculiar graph, summarized below.
Data analysis
Steady 60 FPS/realtime is kept until 3840x3840, then it drops to unusable 2-25 FPS
FPS correlates with the fill percentage of the canvas (100% vs 20%) and with the number of draw calls per frame (d/f); nothing unusual here, it's pretty linear
Buffered canvases (drawing to off-screen & then copying) are way less performant for simple operations
Brave, and any other Chromium-based browsers, keep a steady 60 FPS basically forever (even with a screen-scaled 10000x10000 plane)
2018 iPad running Safari behaves identically to 2021 MBP with M1 Max, just a little bit slower overall
3840px is exactly 4K. While I do have a 4K monitor, disconnecting it and rebooting doesn't change the steep drop point (the MBP has a 3456x2234 screen). Even assuming the 4K resolution is cached somewhere, this doesn't explain the same drop on the iPad, which has nothing to do with 4K screens.
I also checked what actually takes time in Safari, and it seems that rendering to the screen, not the actual drawing on the canvas, is the culprit. That would explain why buffering gives such bad results. Drawing times are actually slower on Chromium-based browsers, yet they are hardly correlated with anything.
Why such a large canvas?!
To avoid an X-Y problem situation, I'm answering this ahead of time. I stumbled upon this issue in real-world tests and then prepared the benchmark, trying to eliminate all variables from the real-world application. 3840px is perfectly reasonable when used on HiDPI/"retina" displays; with 2x scaling it gives 1920px of screen real estate, which is quite normal. While the benchmark clips the viewport, the real code doesn't (it doesn't seem to make a measurable difference).
Bug?
Is there anything I can do here, or is this some sort of limitation/bug in WebKit/Safari? While testing, Safari doesn't crash, but the window becomes very unresponsive until the rendering is stopped.
The only solution I see is detecting the FPS drop and disabling retina scaling on Safari.
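A rough sketch of that fallback, assuming a plain requestAnimationFrame loop (the 30 FPS threshold and one-second measurement window are arbitrary choices, not from the benchmark):

const canvas = document.querySelector('canvas');

// size the drawing buffer for a given scale factor
function setBackingScale(scale) {
  canvas.width = Math.round(canvas.clientWidth * scale);
  canvas.height = Math.round(canvas.clientHeight * scale);
}

setBackingScale(window.devicePixelRatio); // start with retina scaling

let frames = 0;
let windowStart = performance.now();

function tick(now) {
  // ...render one frame here...

  frames++;
  if (now - windowStart >= 1000) {
    const fps = (frames * 1000) / (now - windowStart);
    if (fps < 30) setBackingScale(1); // FPS collapsed: drop retina scaling
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);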

How can I improve performance for multiple images drawn in webgl?

I am programming a simple webgl application, which draws multiple images (textures) over each other. Depending on the scroll position, the scale and opacity of the images is changed, to create a 3D multi-layer parallax effect. You can see the effect here:
http://gsstest.gluecksschmiede.io/ct2/
I am currently working on improving the performance, because the effect does not perform well on older devices (low fps). I lack the in-depth knowledge of WebGL (and WebGL debugging) needed to see the reason for the bad performance, so I need help. This question is only concerned with desktop devices.
What I've tried / am currently doing:
always working with the same program and shader pair
the images are 2000x1067 and already compressed. I need PNG because of the transparency. I could compress them a little more, but not much. The resolution has to stay that way.
already using requestAnimationFrame and non-blocking scroll listeners
The webgl functions I am using to draw the image can be read in this file:
http://gsstest.gluecksschmiede.io/ct2/js/setup.js
Shader code can be found here (just right-click -> view source):
http://gsstest.gluecksschmiede.io/ct2/
Basically, I've used this tutorial/code and did just a few changes:
https://webglfundamentals.org/webgl/lessons/webgl-2d-drawimage.html
I'm then using this setup code to draw the images depending on current scroll position as seen in this file (see the "update" method):
http://gsstest.gluecksschmiede.io/ct2/js/para.js
In my application, about 15 images of 2000x1067 size are drawn on to each other for every frame. I expected this to perform way better than it actually is. I don't know what is causing the bottleneck.
How you can help me:
Provide hints or ideas what code / image compression / whatever changes could improve rendering performance
Provide help on how to debug the performance. Is there a cleverer way than just printing out times with console.log and performance.now()?
Provide ideas on how I could gracefully degrade or provide a fallback that performs better on older devices.
This is just a guess but ...
drawing 15 fullscreen images is going to be slow on many systems. It's just too many pixels. It's not the size of the images; it's the size at which they are drawn. For example, on my MacBook Air the screen resolution is 2560x1600.
You're drawing 15 images. Those images are drawn into a canvas. That canvas is then drawn into the browser's window and the browser's window is then drawn on the desktop. So that's at least 17 draws or
2560 * 1600 * 17 = 70 meg pixels
To get a smooth framerate we generally want to run at 60 frames a second. 60 frames a second means
60 frames a second * 70 meg pixels = 4.2 gig pixels a second.
My GPU is rated for 8 gig pixels a second, so it looks like we might get 60fps here.
Let's compare to a 2015 MacBook Air with Intel HD Graphics 6000. Its screen resolution is 1440x900, which if we calculate things out comes to 1.3 gig pixels a second at 60 frames a second. Its GPU is rated for 1.2 gig pixels a second, so we're not going to hit 60fps on a 2015 MacBook Air.
Note that, like everything, the specified max fillrate for a GPU is one of those theoretical max things; you'll probably never see the top rate because of other overheads. In other words, if you look up the fillrate of a GPU, multiply by 85% or something (just a guess) to get the fillrate you're more likely to see in reality.
You can test this easily: just make the browser window smaller. If you make the browser window 1/4 the size of the screen and it runs smoothly, then your issue was fillrate (assuming you are resizing the canvas's drawing buffer to match its display size). This is because once you do that, fewer pixels are being drawn (75% fewer) but all the other work stays the same (all the JavaScript, WebGL, etc.).
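That parenthetical caveat matters, so here's a common sketch of keeping the drawing buffer matched to the displayed size (the function name is just a convention, not from the original code):

// resize the canvas's drawing buffer to match the size it's displayed at
function resizeCanvasToDisplaySize(canvas) {
  const width = canvas.clientWidth;
  const height = canvas.clientHeight;
  if (canvas.width !== width || canvas.height !== height) {
    canvas.width = width;   // fewer CSS pixels -> fewer buffer pixels to fill
    canvas.height = height;
  }
}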
Assuming that shows your issue is fillrate, here are things you can do:
Don't draw all 15 layers.
If some layers fade out to 100% transparent, then don't draw those layers. If you can design the site so that only 4 to 7 layers are ever visible at once, you'll go a long way toward staying under your fillrate limit.
Don't draw transparent areas
You said 15 layers, but it appears some of those layers are mostly transparent. You could break those apart into, say, 9+ pieces (like a picture frame) and not draw the middle piece. Whether it's 9 pieces or 50 pieces, it's probably better than having 80% of the drawn pixels be 100% transparent.
Many game engines, if you give them an image, will auto-generate a mesh that only uses the parts of the texture that are > 0% opaque. For example, I made a picture-frame image in Photoshop; loading it into Unity, you can see that Unity made a mesh covering only the non-100%-transparent parts.
This is something you'd do offline, either by writing a tool, doing it by hand, or using a 3D mesh editor like Blender, to generate meshes that fit your images so you're not wasting time trying to render pixels that are 100% transparent.
Try discarding transparent pixels
This you'd have to test. In your fragment shader you can put something like
if (color.a <= alphaThreshold) {
  discard; // don't draw this pixel
}
where alphaThreshold is 0.0 or greater. Whether this saves time may depend on the GPU, since using discard is slower than not using it: if you don't use discard, the GPU can do certain checks (like the depth test) early. In your case, though, I think it might be a win. Note that option #2 above, using a mesh for each plane that only covers the non-transparent parts, is by far better than this option.
Pass more textures to a single shader
This one is overly complicated but you could make a drawMultiImages function that takes multiple textures and multiple texture matrices and draws N textures at once. They'd all have the same destination rectangle but by adjusting the source rectangle for each texture you'd get the same effect.
N would probably be 8 or less, since there's a limit on the number of textures you can use in one draw call, depending on the GPU. 8 is the minimum limit IIRC, meaning some GPUs will support more than 8, but if you want things to run everywhere you need to handle the minimum case.
GPUs like most processors can read faster than they can write so reading multiple textures and mixing them in the shader would be faster than doing each texture individually.
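As a hedged illustration of this option, a fragment shader compositing two layers per draw call might look something like the following; every name here (u_tex0, u_tex1, the varyings) is made up for the sketch, not taken from the original setup.js:

// Fragment shader compositing two layers in one draw call.
// Assumes premultiplied alpha; u_tex0 is the lower layer, u_tex1 the upper.
const fragmentShaderSource = `
precision mediump float;
varying vec2 v_texcoord0;
varying vec2 v_texcoord1;
uniform sampler2D u_tex0;
uniform sampler2D u_tex1;
void main() {
  vec4 lower = texture2D(u_tex0, v_texcoord0);
  vec4 upper = texture2D(u_tex1, v_texcoord1);
  // standard "over" operator for premultiplied alpha
  gl_FragColor = upper + lower * (1.0 - upper.a);
}
`;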
Finally, it's not clear why you're using WebGL for this example.
Option 4 would be fastest, but I wouldn't recommend it; it seems like too much work to me for such a simple effect. Still, I just want to point out that, at least at a glance, you could just use N <div>s, set their CSS transform and opacity, and get the same effect. You'd still have the same issues: 15 full-screen layers is too many, and you should hide <div>s whose opacity is 0% (the browser might do that for you, but best not to assume). You could also use the 2D canvas API and you should see similar perf. Of course, if you're doing some kind of special effect (I didn't look at the code) then feel free to use WebGL; just at a glance it wasn't clear.
A couple of things:
Performance profiling shows that your problem isn't WebGL but a memory leak.
Image compression is worthless in WebGL, as WebGL doesn't care about PNG, JPG, or WebP. Textures are always just arrays of bytes on the GPU, so each of your layers is 2000 * 1067 * 4 bytes = 8.536 megabytes.
Never construct data in your animation loop (you currently do); learn how to use the math library you're using, as sketched below.
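To illustrate that last point, here is a minimal sketch assuming a gl-matrix-style API (the actual math library used in the original code wasn't specified):

import { mat4 } from 'gl-matrix';

const scratch = mat4.create(); // allocated once, outside the animation loop

function update(scale) {
  // reuse the same matrix every frame instead of allocating a new one
  mat4.identity(scratch);
  mat4.scale(scratch, scratch, [scale, scale, 1]);
  // ...pass `scratch` to the shader as the texture/model matrix...
}

Allocating fresh matrices or arrays per frame churns the garbage collector, which shows up as periodic frame drops.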

HTML5 cross-browser PNG sequence for 3D

I'm in the middle of creating a 3D video slot machine game in HTML5.
The game is great and it works perfectly in Firefox. It uses PNG sequences to give a 3D look to the characters within the game and the effects. The total number of PNGs is 550 (70 per animation).
The main problem comes with Safari (desktop, iPad and iPhone). When I load in the longer animations, which are over 100 PNGs, the framerate drops dramatically to around 4. I assume this is because Safari's image memory cannot hold 550 images well, despite them only totaling about 10 MB.
Given that file size is also critical, as it's a web game, I considered swapping all the PNGs to GIFs in order to roughly halve the size. However, before I embark on this journey, I figured the same thing would happen because of the number of images.
So the question here is - For a PNG sequence style game, what would be the best cross-browser compatible way of doing this in HTML5?
My only thought so far is to have a sprite sheet per animation, placed into a div and moved via left/top. Alternatively, could it be an issue with the way I'm preloading the images?
Unfortunately I'm unable to show any source code.
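For what it's worth, a minimal sketch of the sprite-sheet idea mentioned above might look like this (the element id, frame size, and frame rate are all illustrative, since the real source can't be shown):

// div#character has the sprite sheet (one horizontal strip) as its background
const el = document.getElementById('character');
const FRAME_WIDTH = 128; // width of one frame in the sheet, in px
const FRAME_COUNT = 70;  // 70 frames per animation, per the question
let frame = 0;

setInterval(() => {
  // shift the background left by one frame width per tick
  el.style.backgroundPosition = `-${frame * FRAME_WIDTH}px 0`;
  frame = (frame + 1) % FRAME_COUNT;
}, 1000 / 24); // ~24 fps playback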

HTML5 Canvas fillRect slow

I'm trying to write a simple raycaster using an HTML5 canvas, and I'm getting abysmal framerates. Firefox's profiler reports that 80% of my runtime is spent in context2d.fillRect(), which I'm using per column for floors and ceilings and per pixel on the textured walls. I came across this question and found that fillRect was faster than 1x1 px pictures by 40% on Chrome and 4% on Firefox. They mention that it's still calculating alpha stuff, although I'd assume that if the alpha is 1 there would be a pass-through for opaque rendering? Are there any other methods for doing a lot of rectangle and pixel blitting with JavaScript that I should try out?
A solution I have found to this is to put the rectangle drawing inside its own path each time:
canvasContext.beginPath();        // start a fresh path
canvasContext.rect(1, 1, 10, 10); // add the rectangle to the new path
canvasContext.fill();             // fill just this path
canvasContext.closePath();        // close it so nothing accumulates
It seems that each call to rect() adds to the path from the previous rect() call; used incorrectly, this can lead to a memory leak or a buildup of resource usage.
There are two solutions that you could try in order to reduce the CPU usage of your renderings.
Try using requestAnimationFrame so your browser renders your canvas graphics whenever it is ready, and especially only when the user has the canvas tab open in the browser. More info: https://developer.mozilla.org/en-US/docs/Web/API/window.requestAnimationFrame
The second solution, depending on whether part or all of your content is dynamic, is to pre-render portions of your canvas onto other "hidden" canvases in memory with JavaScript (for instance, you split your main canvas into 4 sub-canvases and then only have to draw 4 elements on screen), as sketched below.
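A minimal sketch of that pre-rendering idea, assuming the static parts can be drawn once up front (the element id and fill contents are illustrative):

const ctx = document.getElementById('main').getContext('2d');

// off-screen buffer holding the pre-rendered static content
const buffer = document.createElement('canvas');
buffer.width = ctx.canvas.width;
buffer.height = ctx.canvas.height;
const bufferCtx = buffer.getContext('2d');

// draw the static background once, outside the animation loop
bufferCtx.fillStyle = '#223';
bufferCtx.fillRect(0, 0, buffer.width, buffer.height);

function frame() {
  ctx.drawImage(buffer, 0, 0); // one cheap blit instead of many fillRect calls
  // ...draw only the dynamic parts on top...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);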
PS: On a Mac, Firefox combined with multiple canvas renderings will use more CPU than Chrome.
Hope this helps

window.pixelRatio not working in Opera. Any alternative?

I have been working on making our CMS export valid content for mobile devices. One of the problems we encountered was that newer devices, like the iPhone 4, have a higher-resolution display, so we needed to find a way to render the same page correctly both on older devices and on newer ones that use a 300 dpi display. So far we have used JavaScript and window.devicePixelRatio to get the DPI, but it turns out this does not work in Opera(?) and Opera Mobile.
Any suggestions, or maybe a different approach? I researched a bit but was unable to find something.
Thanks
I think you might misunderstand what devicePixelRatio is really telling you — surprise, a pixel is not a pixel!
When you specify a pixel size in CSS, you're not necessarily specifying a size in physical pixels. Instead the CSS px unit is actually a "device-independent pixel" (DIP), which is relative to the device's DPI:
Pixel units are relative to the resolution of the viewing device, i.e., most often a computer display. If the pixel density of the output device is very different from that of a typical computer display, the user agent should rescale pixel values.
The reference pixel is defined as 96dpi (Windows' default DPI setting), so on your computer, it is true that 1 DIP (CSS px) = 1 physical screen pixel. Additionally, even though older iOS devices have a physical DPI of 163, they still use 1 DIP = 1 pixel. However, on iPhone 4's doubled resolution of 326 DPI, 1 DIP = 2 screen pixels, which is what devicePixelRatio = 2 is telling you.
Thus, speaking strictly of the difference between the iPhone 3 and iPhone 4, a HTML element with the style { width:100px; height:100px; } should render at roughly the same size on older devices and on the higher-DPI iPhone 4 since it rescales the pixel values.
So there is no reason you should have to use script to account for high-DPI devices; it should just work.
Opera Mobile doesn't have devicePixelRatio yet, as it is pretty much a non-standard extension added by Apple. We are considering adding it, however, and it may be in the next release of Opera Mobile if we do. You don't need to use JS for this, though: it should work in a media query too, with device-pixel-ratio (with vendor prefixes).
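For example, the media-query route can also be checked from script without devicePixelRatio; a sketch (the exact vendor prefixes needed per browser are an assumption):

// true on displays with at least 2 device pixels per CSS pixel
const isHiDpi = window.matchMedia(
  '(-webkit-min-device-pixel-ratio: 2), (min-resolution: 2dppx)'
).matches;

console.log(isHiDpi ? 'high-DPI display' : 'standard display');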
Hi all,
josh3736 gave a very nice and concise answer to the CSS vs. device pixel issue. I just wanted to add that it does seem to be an issue with large, high-definition images, which may need to be served specifically for individual DPI or PPI device specs. Searching Google, I found that others are (attempting to?) use device-pixel-ratio as a basis for choosing images: keeping the original high-definition image available for high-PPI devices while serving low-definition images to mobiles without high-DPI displays. That way the image is of higher quality on capable devices, yet looks the same size relative to the rest of the page across the spectrum of user devices.
The ability to zoom seems to make this more useful, I gather, since a user can zoom in on high-definition images and get increased detail. Of course this adds another layer of complexity to deal with, but it may be important to address as more and more high-end devices enter the market. Still looking into this, so post back if anyone has good examples.
