So I'm using the Phaser WebGL lighting:
this.lights.enable();
this.lights.setAmbientColor(0x808080);
// addLight(x, y, radius) - the third argument is the light's radius in px
this.spotlight = this.lights.addLight(2040, -200, 20000).setIntensity(20);
I'm trying to cover a really large distance, as seen in that 20000px radius: basically an in-game sun. The issue I face is actually making the large area bright without the intensity blowing out the area near the light so much that you can only see white over there.
Well, if you want a lighting effect, the easy way, as with most 2D games, is to fake it. There is a brilliant article on how to make a day/night cycle in Phaser. It is a bit dated, but it explains exactly how you can achieve this.
Something more or less like what is described in the article is an easy and good solution. Basically:
the sun is just an image
the darkness and brightness can be achieved with overlays and alpha values (use gradients in the overlays if you want a fading effect)
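A minimal sketch of that idea in Phaser 3 (the 'sun' texture key, positions, and timings here are made up for illustration):

// In a Scene's create(): a sun image plus a full-screen darkness overlay.
this.sun = this.add.image(400, 100, 'sun'); // 'sun' is an assumed texture key

// Black rectangle over the whole camera; its alpha is the darkness level.
this.darkness = this.add.rectangle(0, 0, this.scale.width, this.scale.height, 0x000000)
    .setOrigin(0)
    .setScrollFactor(0)
    .setAlpha(0);

// Tween the alpha up and down to fade between day and night.
this.tweens.add({
    targets: this.darkness,
    alpha: 0.7,       // how dark night gets
    duration: 30000,  // half a cycle, in ms
    yoyo: true,
    repeat: -1
});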
Related
I am working on a project for a client, and we are finding that some of the transparent logos have a very ugly-looking dark border around them when they are used in three.js. I have tried so many things with no luck, so I would love help getting the alpha to look nicer.
Three.js vs the supplied image:
It is very faint, but when you zoom in (which they can, to an extent, in the application) you can see the border:
Things I have tried:
Setting the texture's min and mag filters to LinearFilter/NearestFilter. This is the most common suggestion, but you can see in my codesandbox that it does not help. If I set them to NearestFilter, the logos start to become pixelated and alias when the camera moves around.
I changed the blend modes of the standard material and I got weird colors on the edges.
I wrote my own custom shader that blends between the image color and white based on the supplied image's alpha but I still get a weird color leaking through.
I played with the alphaTest value, but this ends up causing the edges to end abruptly and not look great.
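For reference, roughly what those attempts look like in code (a condensed sketch; the texture path and values are placeholders, the real versions are in the sandbox):

import * as THREE from 'three';

const logoTexture = new THREE.TextureLoader().load('logo.png'); // placeholder path
logoTexture.minFilter = THREE.LinearFilter; // attempt 1: filtering
logoTexture.magFilter = THREE.LinearFilter;

// attempt 4: alphaTest discards fragments below the threshold,
// which is what makes the edges end abruptly
const material = new THREE.MeshStandardMaterial({
    map: logoTexture,
    transparent: true,
    alphaTest: 0.5
});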
I demonstrate all of my approaches here:
https://codesandbox.io/s/interesting-wozniak-povk1?file=/src/App.js
I think that my shader is close but not perfect. I would really appreciate any advice on the right way to approach and solve this problem.
You need to increase the resolution of the image; the mesh at half the scale looks better with that resolution on the PNG.
So I found out that texturing planets can be really hard. I created a 4096x4096 image and wrapped it around a high-poly sphere. Apart from the possible memory-management performance issue that comes with a 3-4 MB image, the texture looks bad / pixelated in a close-up (orbital) view.
I was thinking that I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if it's far away, remove the image from memory and apply the low or medium version.
To be honest I am not sure what strategy to use to render high-quality planets. Should I maybe avoid textures and just use height maps and color the planet with JavaScript? Same thing for the clouds: should I create a sphere with an alpha map, or should I use shaders?
As you can see, this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL / three.js has significantly improved over time, but since this all runs in the browser, I assume thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything that switches high detail to low detail.
Because you can't make a 1000000x1000000 texture, which is pretty much what you'd need to get the results you want, you'll have to build the sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail. A single sphere when you can see the whole planet. A sphere made of, say, 100 pieces when slightly closer. A sphere made of 1000 pieces when closer still, a sphere made of 10000 pieces when even closer, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere because you're all the way zoomed out and it transitions to the 100-piece or 1000-piece sphere, it crossfades between the two.
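three.js also ships a simple distance-based helper that handles the mesh-swapping side of this (you'd still manage the texture paging yourself). A minimal sketch, assuming material and scene already exist:

import * as THREE from 'three';

// One LOD object holding the same planet at three detail levels.
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(10, 64, 32), material), 0);   // near
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(10, 32, 16), material), 50);  // mid
lod.addLevel(new THREE.Mesh(new THREE.SphereGeometry(10, 8, 4), material), 200);   // far
scene.add(lod);

// Recent three.js picks the right level during render; with older
// builds you call lod.update(camera) yourself in the animation loop.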
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping that works similarly to cube mapping; each will hold one cubemap face.
Then you loop through all the vertices, take the position vector and normalize it. This will yield a sphere.
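A minimal sketch of that idea, using a tessellated BoxGeometry to stand in for the six separate planes:

import * as THREE from 'three';

// A box with plenty of tessellation; its six faces play the role
// of the six square planes described above.
const geometry = new THREE.BoxGeometry(1, 1, 1, 32, 32, 32);

// Normalizing every vertex position pushes it onto the unit sphere,
// turning the box into a sphere with cube-style topology and UVs.
const pos = geometry.attributes.position;
const v = new THREE.Vector3();
for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i).normalize();
    pos.setXYZ(i, v.x, v.y, v.z);
}
geometry.computeVertexNormals();

const sphere = new THREE.Mesh(geometry, new THREE.MeshNormalMaterial());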
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096x4096 should really be 4096x2048 on the default sphere with equirectangular mapping, but the newly mapped sphere can hold 6 x 4096x4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
I have a WebGL application I've written using three.js, but the FPS is not good enough on some of my test machines. I've tried to profile my application using Chrome's about:tracing, with help from this article: http://www.html5rocks.com/en/tutorials/games/abouttracing/
It appears that the GPU is being overloaded. I also found that my FPS falls drastically when I have my entire scene in the camera's view. The scene contains about 17 meshes and a single directional light source. It's not really a heavy scene; I've seen much heavier scenes render flawlessly on the same GPU.
So, what changes can I make to the scene to make it less heavy, without completely changing it? I've already tried removing the textures, but that doesn't seem to fix the problem.
Is there a way to figure out what computation three.js is pushing onto the GPU? Or would that break the basic abstraction three.js provides?
What are general tips for profiling WebGL/three.js apps on the GPU?
There are various things to try.
Are you draw bound?
Change your canvas to 1x1 pixels. Does your framerate go way up? If so, you're drawing too many pixels or your fragment shaders are too complex.
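In three.js that test is a one-liner, assuming renderer is your WebGLRenderer:

// Shrink the drawing buffer to a single pixel; if FPS soars, you're pixel bound.
renderer.setSize(1, 1);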
To see if simplifying your fragment shader would help, use a simpler shader. I don't know three.js that well; maybe the Basic Material?
Do you have shadows? Turn them off. Does it go faster? Can you use simpler shadows? For example, the shadows in this sample are fake; they are just planes with a circle texture.
Are you using any post-processing effects? Post-processing effects are expensive, especially on mobile GPUs.
Are you drawing lots of opaque stuff? If so, can you sort your drawing order so you draw front to back (close to far)? Not sure if three.js has an option to do this or not. I know it can sort transparent stuff back to front, so it should be simple to reverse the test. This will make rendering go quicker, assuming you're drawing with the depth test on, because pixels in the back will be rejected by the DEPTH_TEST and so won't have the fragment shader run for them.
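One hand-rolled way to force that order, assuming you keep your own array of opaque meshes (current three.js respects renderOrder; lower values draw first):

// Nearer meshes get a smaller renderOrder, so they draw first and
// farther pixels get rejected by the depth test.
for (const mesh of opaqueMeshes) {
    mesh.renderOrder = mesh.position.distanceTo(camera.position);
}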
Another thing you can do to save bandwidth is draw to a smaller canvas and have it stretched using CSS to cover the area where you want it to appear. Lots of games do this.
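For example, assuming a full-window canvas:

// Render at half resolution; the third argument (false) tells three.js
// not to touch the CSS size, so the canvas stays stretched full-size.
renderer.setSize(window.innerWidth / 2, window.innerHeight / 2, false);
renderer.domElement.style.width = window.innerWidth + 'px';
renderer.domElement.style.height = window.innerHeight + 'px';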
Are you geometry bound?
You say you're only drawing 17 meshes, but how big are those meshes? 17 twelve-triangle cubes, or 17 one-million-triangle meshes?
If you're geometry bound, can you simplify? If the geometry goes far into the distance, can you split it up and use LODs? See the LOD sample.
I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort the polygons, nor the objects. The graphics card is "dumb": you tell it to draw an object and it will draw the pixels that represent it, and also, in another non-visible "image" called the zbuffer (or depth buffer), it will draw pixels that represent the object, but instead of color it stores the distance to the camera for each pixel. For any other object that you draw afterwards, the graphics card will check the distance to the camera for each pixel, and if it's farther away, it won't draw it (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects. But it doesn't play well with transparency. Say you have 2 transparent objects and you want A to be drawn behind B. You'll need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. In order to draw 2 intersecting transparent objects, the graphics card would have to sort all the polygons, and as the graphics card doesn't do that, you'll have to do it yourself.
It's one of those things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true we'll sort that object so it's drawn before (earlier than) other objects that are in front. But we can't really help you if you want to intersect them.
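If you do need explicit control in current three.js, you can pin the order per object; a sketch with two hypothetical cubes:

cubeA.material.transparent = true;
cubeB.material.transparent = true;

// Lower renderOrder draws first, so A is forced behind B
// regardless of camera angle.
cubeA.renderOrder = 0;
cubeB.renderOrder = 1;

// A common extra tweak: stop transparent objects from writing depth,
// so they don't z-reject each other.
cubeA.material.depthWrite = false;
cubeB.material.depthWrite = false;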
I'm using JavaScript and the three.js 3D engine, and I want to do a fog effect.
Here's an example: http://mrdoob.github.com/three.js/examples/webgl_geometry_terrain_fog.html
But it only supports WebGL, so is there any way to simulate a fog effect, or a blur effect, with JavaScript and canvas?
Thanks a lot.
Three.js is WebGL. The example you're looking at was created by the same person who made three.js.
Three.js supports fog already with scene.fog.
https://github.com/mrdoob/three.js/wiki/API-Reference#wiki-THREE.Fog
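A minimal example (colors and distances are illustrative):

// Linear fog: starts 10 units from the camera, fully fogged by 100.
scene.fog = new THREE.Fog(0xcccccc, 10, 100);

// Or exponential fog, which often looks more natural:
// scene.fog = new THREE.FogExp2(0xcccccc, 0.02);

// Match the clear color so distant objects fade into the background.
renderer.setClearColor(scene.fog.color);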
Below is for people who searched for fog/blur in canvas and are looking for a 2D solution.
There are a bunch of places that have implemented various blur effects. The Pixastic library has a lot of such effects implemented.
Fog is something different, though. There isn't a universal definition, and it really depends on what you're looking for. One way would be to set the globalAlpha of the canvas to something like 0.3 and then draw perlin noise in the locations where you want "fog" to appear. Note that the color gradient of the noise you most likely want runs from transparent to dark gray.
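A rough sketch of that, with random radial-gradient blobs standing in for proper perlin noise:

const ctx = document.querySelector('canvas').getContext('2d');

ctx.save();
ctx.globalAlpha = 0.3; // overall fog strength
for (let i = 0; i < 200; i++) {
    const x = Math.random() * ctx.canvas.width;
    const y = Math.random() * ctx.canvas.height;
    const r = 20 + Math.random() * 40;
    // Each blob fades from dark gray in the center to transparent.
    const g = ctx.createRadialGradient(x, y, 0, x, y, r);
    g.addColorStop(0, 'rgba(60, 60, 60, 0.5)');
    g.addColorStop(1, 'rgba(60, 60, 60, 0)');
    ctx.fillStyle = g;
    ctx.beginPath();
    ctx.arc(x, y, r, 0, Math.PI * 2);
    ctx.fill();
}
ctx.restore();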