I'm in the middle of creating a 3D video slot machine game in HTML5.
The game is great and works perfectly in Firefox. It uses PNG sequences to give a 3D look to the characters and effects within the game. The total number of PNGs is 550 (70 per animation).
The main problem comes with Safari (desktop, iPad and iPhone). When I load the longer animations, which are over 100 PNGs, the framerate drops dramatically to around 4 fps. I assume this is because Safari's image memory can't handle 550 images well, despite them only totalling about 10 MB.
Since file size is also critical for a web game, I considered swapping all the PNGs for GIFs to roughly halve the size. However, before I embark on that journey, I figure the same thing would happen because of the sheer number of images.
So the question is: for a PNG-sequence-style game, what would be the best cross-browser-compatible way of doing this in HTML5?
My only thought so far is to have one spritesheet per animation, placed in a div and moved via left/top (a rough sketch of this idea follows below). Alternatively, could the issue be with the way I'm preloading the images?
Unfortunately I'm unable to show any source code.
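To illustrate the spritesheet idea, though, here is a rough sketch (element names, frame sizes and frame counts are made up, not my actual code):

// Markup (assumed): a clipping div containing one long horizontal strip per animation.
// <div id="viewport" style="width:128px; height:128px; overflow:hidden; position:relative;">
//   <img id="strip" src="run-strip.png" style="position:absolute; left:0; top:0;">
// </div>
const FRAME_W = 128;   // width of one frame in the strip (assumed)
const FRAMES = 70;     // frames per animation
const strip = document.getElementById('strip');
let frame = 0;

function tick() {
  frame = (frame + 1) % FRAMES;
  strip.style.left = (-frame * FRAME_W) + 'px';   // slide the strip left by one frame
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);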
Related
I'm building a creative simulation in p5.js by employing double pendula. Here is the link to the stripped-down version of the code (I'm not sure it can be made smaller):
https://github.com/amzon-ex/amzon-ex.github.io/tree/master/dp-sketch-stripped
Here is where you can see the animation: https://amzon-ex.github.io/dp-sketch-stripped/dp-sketch.html
When I run this on my laptop (Windows 10, MS Edge 88) I get about 55-60 fps, which is what I want. However, when I run the same sketch on a mobile device (Android, Chrome 88) the performance is very poor and I hardly get 5-6 fps. I do not understand what is complicated about my code that contributes to such poor performance. I have run other kinds of p5.js sketches on mobile before and they've worked well.
Any insight is appreciated.
Since I have a few files in the project link I provided, here's a quick guide (there is absolutely no need to read everything):
sketch.js is the key file, which builds the animation. It takes an image and builds an array of brightness values from it (the array lumamap, in setup()); a rough sketch of that step follows this list. Then I draw trails for my pendula, where the brightness of the trail at any pixel corresponds to the brightness value in lumamap.
dp-sketch.html is the HTML container for the p5 sketch.
libs/classydp.js houses the DoublePendulum class which describes a double pendulum object.
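For context, the brightness map is built roughly along these lines (a simplified sketch; variable and file names may differ from the real sketch.js):

let img;
let lumamap = [];

function preload() {
  img = loadImage('source.png');   // hypothetical image file
}

function setup() {
  createCanvas(500, 500);
  img.loadPixels();                // expose img.pixels as flat RGBA bytes
  for (let y = 0; y < img.height; y++) {
    const row = [];
    for (let x = 0; x < img.width; x++) {
      const i = 4 * (y * img.width + x);
      // perceptual luma from the R, G, B channels
      row.push(0.299 * img.pixels[i] + 0.587 * img.pixels[i + 1] + 0.114 * img.pixels[i + 2]);
    }
    lumamap.push(row);
  }
}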
As I've found out with some help, the issue is tied to varying pixel density on different devices. Since mobile devices have a higher pixel density than desktops/laptops, my 500x500 canvas is rendered at a much higher resolution on those displays, which means many more pixels to fill. Coupled with the fact that mobile processors are weaker than desktop processors, the lag/low framerate is expected.
This can be avoided by calling pixelDensity(1) in setup(), which stops p5 from rendering a larger backing canvas on high-density displays.
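For illustration, a minimal setup() with the fix (canvas size taken from the sketch above):

function setup() {
  pixelDensity(1);           // render 1 canvas pixel per CSS pixel, even on high-DPI screens
  createCanvas(500, 500);    // now stays 500x500 device pixels on mobile as well
}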
I am programming a simple WebGL application which draws multiple images (textures) over each other. Depending on the scroll position, the scale and opacity of the images are changed to create a 3D multi-layer parallax effect. You can see the effect here:
http://gsstest.gluecksschmiede.io/ct2/
I am currently working on improving the performance, because the effect does not perform well on older devices (low fps). I lack the in-depth knowledge of WebGL (and WebGL debugging) to see the reason for the poor performance, so I need help. This question is only concerned with desktop devices.
What I've already tried / am currently doing:
always working with the same program and shader pair
the images are 2000x1067 and already compressed. I need PNG because of the transparency. I could compress them a little more, but not much, and the resolution has to stay as it is.
already using requestAnimationFrame and non-blocking scroll listeners
The webgl functions I am using to draw the image can be read in this file:
http://gsstest.gluecksschmiede.io/ct2/js/setup.js
Shader code can be found here (just right-click -> view source):
http://gsstest.gluecksschmiede.io/ct2/
Basically, I've used this tutorial/code and did just a few changes:
https://webglfundamentals.org/webgl/lessons/webgl-2d-drawimage.html
I'm then using this setup code to draw the images depending on current scroll position as seen in this file (see the "update" method):
http://gsstest.gluecksschmiede.io/ct2/js/para.js
In my application, about 15 images of 2000x1067 are drawn on top of each other every frame. I expected this to perform much better than it actually does. I don't know what is causing the bottleneck.
How you can help me:
Provide hints or ideas about what code / image compression / other changes could improve rendering performance
Provide help on how to debug the performance. Is there a smarter way than just printing out timings with console.log and performance.now()?
Provide ideas on how I could degrade gracefully or provide a fallback that performs better on older devices.
This is just a guess but ...
drawing 15 fullscreen images is going to be slow on many systems. It's just too many pixels. It's not the size of the images, it's the size at which they are drawn. On my MacBook Air, for example, the screen resolution is 2560x1600.
You're drawing 15 images. Those images are drawn into a canvas. That canvas is then drawn into the browser's window and the browser's window is then drawn on the desktop. So that's at least 17 draws or
2560 * 1600 * 17 ≈ 70 million pixels
To get a smooth framerate we generally want to run at 60 frames a second. 60 frames a second means
60 frames a second * 70 million pixels = 4.2 gigapixels a second.
My GPU is rated for 8 gigapixels a second, so it looks like we might get 60 fps here.
Let's compare to a 2015 MacBook Air with an Intel HD Graphics 6000. Its screen resolution is 1440x900, which with the same calculation comes to about 1.3 gigapixels a second at 60 frames a second. Its GPU is rated for 1.2 gigapixels a second, so we're not going to hit 60 fps on a 2015 MacBook Air.
Note that, like everything, the specified max fillrate for a GPU is a theoretical maximum; you'll probably never hit the top rate because of other overheads. In other words, if you look up the fillrate of a GPU, multiply it by 85% or so (just a guess) to get the fillrate you're more likely to see in reality.
You can test this easily: just make the browser window smaller. If you make the browser window 1/4 the size of the screen and it runs smoothly, then your issue was fillrate (assuming you are resizing the canvas's drawing buffer to match its display size). Once you do that, far fewer pixels are being drawn (75% fewer) but all the other work stays the same (all the JavaScript, WebGL, etc.).
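The parenthetical above assumes the drawing buffer is kept in sync with the displayed size, typically with a helper like this (a common pattern, not necessarily what setup.js does; gl is the WebGL context):

function resizeCanvasToDisplaySize(canvas) {
  const width  = canvas.clientWidth;    // size the canvas is displayed at, in CSS pixels
  const height = canvas.clientHeight;
  if (canvas.width !== width || canvas.height !== height) {
    canvas.width  = width;              // resize the drawing buffer to match
    canvas.height = height;
  }
}

// called every frame, before drawing
resizeCanvasToDisplaySize(gl.canvas);
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);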
Assuming that shows your issue is fillrate, here are things you can do:
Don't draw all 15 layers.
If some layers fade out to 100% transparent, don't draw those layers at all. If you can design the site so that only 4 to 7 layers are ever visible at once, you'll go a long way towards staying under your fillrate limit.
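In code that could be as simple as this (the layer objects and the draw helper are assumptions, not the site's actual code):

for (const layer of layers) {
  if (layer.opacity <= 0) {
    continue;                 // fully faded out: skip the draw call entirely
  }
  drawLayer(layer);           // whatever draw call the update() code currently makes
}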
Don't draw transparent areas
You said 15 layers, but it appears some of those layers are mostly transparent. You could break those apart into, say, 9+ pieces (like a picture frame) and not draw the middle piece. Whether it's 9 pieces or 50 pieces, it's probably better than 80% of the pixels being 100% transparent.
Many game engines, if you give them an image, will auto-generate a mesh that only covers the parts of the texture that are more than 0% opaque. For example, I made this frame in Photoshop.
Loading it into Unity, you can see that Unity made a mesh that covers only the parts that are not 100% transparent.
This is something you'd do offline, either by writing a tool, doing it by hand, or using a 3D mesh editor like Blender, to generate meshes that fit your images so you're not wasting time trying to render pixels that are 100% transparent.
Try discarding transparent pixels
This you'd have to test. In your fragment shader you can put something like
if (color.a <= alphaThreshold) {
discard; // don't draw this pixel
}
Where alphaThreshold is 0.0 or greater. Whether this saves time may depend on the GPU, since using discard is slower than not using it: if you don't use discard, the GPU can do certain checks (such as depth testing) early. In your case, though, I think it might be a win. Note that option #2 above, using a mesh for each plane that only covers the non-transparent parts, is by far better than this option.
Pass more textures to a single shader
This one is overly complicated but you could make a drawMultiImages function that takes multiple textures and multiple texture matrices and draws N textures at once. They'd all have the same destination rectangle but by adjusting the source rectangle for each texture you'd get the same effect.
N would probably be 8 or less, since there's a limit on the number of textures you can use in one draw call, depending on the GPU. 8 is the minimum limit IIRC, meaning some GPUs will support more than 8, but if you want things to run everywhere you need to handle the minimum case.
GPUs, like most processors, can read faster than they can write, so reading multiple textures and mixing them in the shader would be faster than drawing each texture individually.
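A sketch of what the JavaScript side might look like (names, uniform layout and the quad setup are assumptions; the matching fragment shader would declare one sampler, texture matrix and opacity per layer, and uniform locations would normally be looked up once and cached):

function drawMultiImages(gl, program, textures, texMatrices, opacities) {
  gl.useProgram(program);
  textures.forEach((tex, i) => {
    gl.activeTexture(gl.TEXTURE0 + i);                 // one texture unit per layer
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.uniform1i(gl.getUniformLocation(program, 'u_texture' + i), i);
    gl.uniformMatrix3fv(gl.getUniformLocation(program, 'u_texMatrix' + i), false, texMatrices[i]);
    gl.uniform1f(gl.getUniformLocation(program, 'u_opacity' + i), opacities[i]);
  });
  // one shared destination quad; the fragment shader samples and blends all N textures
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}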
Finally it's not clear why you're using WebGL for this example.
Option 4 would be fastest, but I wouldn't recommend it; it seems like too much work for such a simple effect. Still, I just want to point out that, at least at a glance, you could just use N <div>s, set their CSS transform and opacity, and get the same effect. You'd still have the same issues: 15 fullscreen layers is too many, and you should hide <div>s whose opacity is 0% (the browser might do that for you, but best not to assume). You could also use the 2D canvas API and should see similar performance. Of course, if you're doing some kind of special effect (I didn't look at the code), then feel free to use WebGL; at a glance it just wasn't clear.
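A rough sketch of that <div> approach (the layer data and the two helper functions are assumptions):

const layers = [ /* { src, opacityAt(scroll), scaleAt(scroll) }, ... */ ];

layers.forEach(layer => {
  const el = document.createElement('div');
  el.className = 'layer';                              // fullscreen, position: fixed, etc. in CSS
  el.style.backgroundImage = 'url(' + layer.src + ')';
  document.body.appendChild(el);
  layer.el = el;
});

function update(scroll) {
  layers.forEach(layer => {
    const opacity = layer.opacityAt(scroll);           // assumed helper
    layer.el.style.display = opacity <= 0 ? 'none' : '';   // hide fully faded layers
    layer.el.style.opacity = opacity;
    layer.el.style.transform = 'scale(' + layer.scaleAt(scroll) + ')';   // assumed helper
  });
}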
A couple of things:
Performance profiling shows that your problem isn't WebGL but a memory leak.
Image compression is worthless in WebGL, as WebGL doesn't care about PNG, JPG or WebP: textures are always just uncompressed arrays of bytes on the GPU, so each of your layers is 2000 * 1067 * 4 bytes = 8.536 megabytes.
Never construct new data in your animation loop (you currently do); learn how to use the math library you're using so you can reuse objects instead.
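As a sketch of that last point (assuming a gl-matrix style API; the exact library in para.js may differ):

const scratch = mat4.create();        // allocate once, outside the loop

function render(time) {
  // BAD: calling mat4.create() or new Float32Array() here allocates every frame
  // and forces the garbage collector to run during the animation.
  mat4.identity(scratch);             // GOOD: reuse the preallocated matrix
  mat4.scale(scratch, scratch, [2, 2, 1]);
  // ... draw using `scratch` ...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);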
I am trying to change the background of a video to transparent. I have read about and used a couple of solutions, but they did not work very well with the video I use from YouTube.
The solutions I used are:
1) https://github.com/m90/seeThru
2) https://jakearchibald.com/scratch/alphavid/
Both solutions work fine with their demo videos, but not with the video I use.
If someone can explain how this works, maybe I will be able to fix the issue.
Here is the result I am getting.
There are several methods in use. The best effect comes from a precomputed mask (created during post production) that is added to the bottom half of the video. The video is then rendered to a canvas and the two halves are combined to create the transparent frame, by getting the pixel data and moving a colour channel into the alpha channel.
The process is very CPU intensive (for JavaScript) and only good for low-resolution video. Plus you need to create the animated mask and double the size of the video. If you view the second example in a standard player you will see the other half.
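As a rough sketch of that technique (element selection and sizes are assumptions): the source video is double height, colour on top and a greyscale mask below, and each frame one channel of the mask is copied into the alpha channel of the colour half.

const video  = document.querySelector('video');
const out    = document.querySelector('canvas').getContext('2d');
const buffer = document.createElement('canvas').getContext('2d');

function drawFrame() {
  const w = video.videoWidth;
  const h = video.videoHeight / 2;            // top half: colour, bottom half: mask
  if (buffer.canvas.width !== w) {
    buffer.canvas.width = w;  buffer.canvas.height = h * 2;
    out.canvas.width = w;     out.canvas.height = h;
  }
  buffer.drawImage(video, 0, 0);
  const colour = buffer.getImageData(0, 0, w, h);
  const mask   = buffer.getImageData(0, h, w, h);
  for (let i = 3; i < colour.data.length; i += 4) {
    colour.data[i] = mask.data[i - 3];        // red channel of the mask -> alpha
  }
  out.putImageData(colour, 0, 0);
  requestAnimationFrame(drawFrame);
}
video.addEventListener('play', () => requestAnimationFrame(drawFrame));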
The second method, used by the first example, is to compute the alpha on the fly. This is even more CPU intensive but very simple to do. Again, if you have low-resolution video and a fast device it is practical. You are then faced with the problem of setting the thresholds for transparency: because videos use lossy compression, you will have trouble with edges and the threshold colour.
Your best bet is to use a WebGL solution (if you don't want to do it in post production) and do the masking on the GPU, where you can use a more complex algorithm and some temporal (frame-to-frame) filtering as well. It will depend on the video quality and the type of background (single colour or static background). You could also find an asm.js solution that will work better; I remember seeing one some time ago and will provide the link if I can find it.
Unfortunately, JavaScript is not up to the job of high-quality matte effects in realtime for the time being. It is a shame, as the 2D API would only need a single additional global composite operation, "chroma-alpha" (move the mean source RGB to the destination alpha), to open up so many additional canvas effects (which god do I pray to for that to happen?). For now you have to move every pixel in JavaScript.
I am currently building a small turn-based maze game. The maze is 2D and built up from tiles, much like Zelda-style map games.
I have at least 256 different tiles to select from, and I will be reusing them a lot for different areas such as buildings, trees, walls, grass, water, etc.
My plan is to make it run as smoothly as I can without using any framework other than jQuery for Ajax and DOM manipulation.
The game engine is server-side, so the client browser is not performing any of the AI logic etc. It just requests the "current state" of the game and receives the objects/enemies etc., but it has the level map downloaded from the start. The level could be around 256x256 tiles and each tile could be 32x32 pixels... I haven't decided on the block size yet, but I think that's the most optimal. It might be larger, though.
The player viewport will probably take up around 9x9 tiles, or perhaps 15x15 tiles. Then you scroll to see the action and move around.
As it's turn-based, there is no need for flicker-free 60 fps; a single flicker here and there is okay, but not all the time.
My question is: what's the most optimal way to generate the map in the client?
1) Should I transfer the map as "data" (JSON or similar) and then generate divs with CSS to build the tilemap in the client (a rough sketch of this approach appears at the end of this question)? This minimizes bandwidth, but I fear the client's DOM will be working hard when I scroll the map.
or
2) Should I prerender the full tilemap, or larger parts (squares) of it, so that the browser has fewer divs to handle? This kills reusability for the maps as they are hardly patterned, so it's just going to be a "pure graphics download" plus the tilemap information needed for game collision etc. It ought to be faster for the browser to scroll.
I can't figure out whether the second approach will take more or fewer resources once downloaded, as the client/browser will have to allocate memory for all the larger "prerendered" blocks instead. But with the "data optimal" solution from 1), it's also going to generate a lot of divs with classes and render these in the browser DOM somehow.
SVG and canvas are currently not options, unless someone can provide a fast way of drawing alpha-transparent PNGs to a canvas while still being able to scroll around with a viewport (with as many FPS as possible, i.e. more than 15 fps).
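To make option 1 concrete, here's roughly what I imagine (all names, the 32px tile size and the 16-column spritesheet layout are just placeholders):

const TILE = 32;             // tile size in pixels
const SHEET_COLUMNS = 16;    // 16 x 16 tiles = 256 tiles in the spritesheet

// map: a 2D array of tile indices received as JSON from the server
function renderMap(map, container) {
  map.forEach((row, y) => {
    row.forEach((tileIndex, x) => {
      const tile = document.createElement('div');
      tile.className = 'tile';
      tile.style.left = (x * TILE) + 'px';
      tile.style.top  = (y * TILE) + 'px';
      // pick the right 32x32 cell out of the spritesheet
      tile.style.backgroundPosition =
        (-(tileIndex % SHEET_COLUMNS) * TILE) + 'px ' +
        (-Math.floor(tileIndex / SHEET_COLUMNS) * TILE) + 'px';
      container.appendChild(tile);
    });
  });
}
// CSS (assumed): .tile { position: absolute; width: 32px; height: 32px;
//                        background-image: url(tiles.png); }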
I'm trying to make a sick slider like this http://learnlakenona.com/
But I'm having major performance problems in Chrome (it seems their site doesn't even run the slider in Chrome). I'm displaying the images using background-image CSS on divs; adding them as img tags caused even more lag.
I disabled all JavaScript and noticed I was still getting lag just from having some huge images sitting on top of each other.
Here are just some images sitting there. Try changing the size of the panels; redrawing the images locks it up. (Sometimes it runs okay for a bit, which is weird.)
http://jsfiddle.net/LRynj/2/
Does anyone have an idea how I can get acceptable performance on a slider like this!? (images need to be pretty big)
Optimize your image assets by resizing them in an image editor or lowering the quality.
You can also use free tools online:
http://sixrevisions.com/tools/8-excellent-tools-for-optimizing-your-images/
The site uses HUGE transparent PNGs for a parallax effect, so:
you'll have to reduce the total weight of your images: try converting these PNGs to PNG-8 if they aren't already. Quantization algorithms do a very good job of reducing images to 256 colors without too much degradation in quality.
you have to keep transparency for the parallax effect. Both types of transparency are compatible with PNG-8: the GIF-like opaque/transparent bit per pixel, and "PNG-32"-like (PNG-24 + 8 bits of alpha) where each pixel has 256 levels of transparency. Adobe Fireworks is particularly good at creating such "PNG-8+alpha" images; converters also exist, but they're not perfect (it depends on your images).
loading only the part of each image that is seen immediately, and the rest of your 9600px-wide (!) images later, would greatly reduce the time to first view. You can achieve that by slicing your images into chunks of 1920 or 2560px, loading the visible part of the 3 images as background images, and loading all the other parts via a script that executes only after the DOM is ready (sketched below). Not too many parts, because that would mean more assets to download, but still not a 4 MB sprite :) Bonus: by slicing your 3 images to PNG-8, each PNG will have its own 256-color palette and overall quality will be better (not as perfect as a PNG-24, but better than a single 9600px PNG-8 that can only use 256 colors total: more shades of grey for the man's suit in one PNG, more shiny colors for the ball-and-stick molecule, etc.).
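A small sketch of that deferred loading (class names and the data attribute are made-up placeholders):

document.addEventListener('DOMContentLoaded', () => {
  // the first (visible) slice of each layer is already referenced in the CSS;
  // the remaining slices are listed in a data attribute and fetched afterwards
  document.querySelectorAll('.parallax-layer').forEach(layer => {
    (layer.dataset.slices || '').split(',').filter(Boolean).forEach(src => {
      const img = new Image();
      img.addEventListener('load', () => layer.appendChild(img));
      img.src = src;          // triggers the download once the DOM is ready
    });
  });
});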
EDIT: and don't ever try to load that on a smartphone! I like media queries and avoid UA detection because it isn't needed most of the time and is never perfect, but this is one of the cases (choosing whether to load 8 MB of images) where it'll be welcome... Ignoring users who don't have fiber and won't wait for your site to display is another issue, not related to your question.