An internal counter in WebGL? - javascript

I am learning WebGL by doing a simple drawing: a horizontal line and a vertical line, alternating every 10 frames (i.e. 10 frames display a horizontal line, then the next 10 frames display a vertical line). I got that working by keeping a counter in the js code and giving the vertex shader the proper coords on every frame. Is there a way to let the WebGL program handle this counter instead of js? Is it possible to pass the 4 points (of the 2 lines) to the WebGL program once, and have it handle the counting with some kind of variable that persists through every main iteration?
I hope I can demonstrate better with the code below. The counter variable is what I am hoping for:
attribute vec3 coordinates;
int counter = 0;
void main(void) {
    counter = counter + 1;
    if (counter < 10) {
        gl_Position = vec4(coordinates[0], coordinates[1], coordinates[2], 1.0);
    } else {
        gl_Position = vec4(coordinates[3], coordinates[4], coordinates[5], 1.0);
    }
    if (counter >= 20) {
        counter = 0;
    }
}
If that is not possible, how should I handle this problem? Is passing the right vertices from the js code the way to go?
Thank you very much for your attention. Any help would be appreciated.

You need to pass the counter from js as a uniform.
Your code will not work since counter is a variable local to the vertex shader and is not shared with other invocations. Even if it were shared, keep in mind that the vertex shader is called once per vertex, in no particular order.
Best practice is to keep shaders for rendering only and to keep this kind of logic in the application.
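A minimal sketch of that approach (assuming gl is a WebGLRenderingContext and program is already linked with a vertex shader that declares uniform int u_frame; - the uniform name is made up for illustration):

// JavaScript side: advance a frame counter and hand it to the shader as a uniform.
var frameCounter = 0;
var frameLoc = gl.getUniformLocation(program, 'u_frame');

function render() {
    frameCounter = (frameCounter + 1) % 20;
    gl.useProgram(program);
    gl.uniform1i(frameLoc, frameCounter);  // the vertex shader can branch on u_frame < 10
    gl.drawArrays(gl.LINES, 0, 2);         // buffers/attributes assumed to be set up already
    requestAnimationFrame(render);
}
requestAnimationFrame(render);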

In WebGL1 there is no counter. You'd have to pass one in. You can do this by filling a buffer with increasing numbers and passing that in as an attribute to your vertex shader.
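For example, a minimal sketch of that WebGL1 route (assuming gl and a linked program whose vertex shader declares attribute float a_index; - both names are illustrative):

// WebGL1: supply a per-vertex counter yourself as an attribute.
var count = 100;
var indices = new Float32Array(count);
for (var i = 0; i < count; i++) {
    indices[i] = i;  // 0, 1, 2, 3, ...
}
var buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, indices, gl.STATIC_DRAW);
var loc = gl.getAttribLocation(program, 'a_index');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);
// the shader now sees a_index increase by one for each vertex drawn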
In WebGL2 there is a built-in counter, gl_VertexID, as well as gl_InstanceID.
Whether or not you should be using counters depends on your use case. The normal way to draw a lot of points is to pass the points in as data via attributes.
The normal way to draw consecutive points right next to each other is to pass in vertices that generate a triangle that covers the points you want rendered.
Using the counters, either your own in WebGL1 or the built-in ones in WebGL2, is fairly uncommon.
GLSL shaders have no state between one iteration and the next so your counter example won't work.
If you're new to WebGL, might I suggest these articles.

Related

WebGL - Set multiple vertices

I'm trying to translate some TypeScript code into a vertex shader to use with WebGL. My goal is to draw the bitangent lines of two circles. I have a function to calculate the tangent points here, https://jsfiddle.net/Zanchi/4xnp1n8x/2/ on line 27. Essentially, it returns a tuple of points with x and y values.
// First circle bottom tangent point
const t1 = {
    x: x1 + r1 * cos(PI/2 - alpha),
    y: y1 + r1 * sin(PI/2 - alpha)
}; // ... and so on
I know I can do the calculation in JS and pass the values to the shader via an attribute, but I'd like to leverage the GPU to do the point calculations instead.
Is it possible to set multiple vertices in a single vertex shader call, or use multiple values calculated in the first call of the shader in subsequent calls?
Is it possible to set multiple vertices in a single vertex shader call
No
or use multiple values calculated in the first call of the shader in subsequent calls?
No
A vertex shader outputs 1 vertex per iteration/call. You set the number of iterations when you call gl.drawArrays (gl.drawElements is more complicated)
I'm not sure you gain much by not just putting the values in an attribute. It might be fun to generate them in the vertex shader but it's probably not performant.
In WebGL1 there is no easy way to use a vertex shader to generate data. First off you'd need some kind of count or something that changes for each iteration and there is nothing that changes if you don't supply at least one attribute. You could supply one attribute with just a count [0, 1, 2, 3, ...] and use that count to generate vertices. This is what vertexshaderart.com does but it's all for fun, not for perf.
In WebGL2 there is the gl_VertexID built in variable which means you get a count for free, no need to supply an attribute. In WebGL2 you can also use transform feedback to write the output of a vertex shader to a buffer. In that way you can generate some vertices once into a buffer and then use the generated vertices from that buffer (and therefore probably get better performance than generating them every time).
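As a rough sketch of the WebGL2 route (not code for the tangent problem itself; it just shows gl_VertexID driving the output, with gl assumed to be a WebGL2 context and u_numPoints an illustrative uniform):

// WebGL2: no attribute needed, gl_VertexID supplies the count.
// This shader just spreads N points across the clip-space x axis.
const vertexSrc = `#version 300 es
uniform int u_numPoints;
void main() {
    float t = float(gl_VertexID) / float(u_numPoints - 1);  // 0.0 .. 1.0
    gl_Position = vec4(t * 2.0 - 1.0, 0.0, 0.0, 1.0);
    gl_PointSize = 2.0;
}`;
// ...compile and link as usual, then:
// gl.uniform1i(gl.getUniformLocation(program, 'u_numPoints'), 100);
// gl.drawArrays(gl.POINTS, 0, 100);  // 100 vertex shader iterations, no buffers bound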

Object pooling (isometric tiling) on phaser and camera positioning

I am new to Phaser and I am currently having a hard time generating a tilemap with the aid of the Phaser isometric plugin. I am also having trouble understanding some of the concepts related to the Phaser world, game, and camera, which makes generating the tilemap correctly even harder. I have noticed that this problem seems to be an obstacle for Phaser newbies like myself, and having it correctly explained would certainly help.
Getting to the matter:
I have generated a tilemap using a for loop and single grass sprites. The for loop is working correctly, and I have also implemented a function where I can specify the number of tiles I want to generate.
{
    spawnBasicTiles: function(half_tile_width, half_tile_height, size_x, size_y) {
        var tile;
        for (var xx = 0; xx < size_x; xx += half_tile_width) {
            for (var yy = 0; yy < size_y; yy += half_tile_height) {
                tile = game.add.isoSprite(xx, yy, 0, 'tile', 0, tiles_group);
                tile.checkWorldBounds = true;
                tile.outOfBoundsKill = true;
                tile.anchor.set(0.5, 0);
            }
        }
    }
}
So the process of generating the static grass tiles is not a problem. The problem, and one of the reasons I am trying to get object pooling to work correctly, is that when the tile count goes above 80 there is a dramatic impact on game performance. Since I am aiming for HUGE, auto-generated maps that are rendered according to the player character's position, object pooling is essential. I have created a group for these tiles and added the properties I thought would be required to keep tiles that are out of bounds from being rendered (physics, world bounds...). However, after many attempts I concluded that even the tiles that are out of bounds are still being generated. I also tried another method than add.isoSprite to generate the tiles, .create, but it made no difference. I guess no tiles are being "killed".
Is this process correct? What do I need to do in order to ONLY generate the tiles that appear on camera (I assume the camera is the visible game rectangle) and generate the rest of them when the character moves to another area assuming the camera is already tracking the character movement?
Besides that, I am looking to generate my character in the middle of the world, which is the middle of the generated grass tilemap. However, I am having a hard time doing that too. I think the following concepts are the ones I should play with in order to achieve that, although I haven't managed to yet:
.anchor.set()
game.world.setBounds (especially this one... it doesn't seem to set the bounds where I tell it to);
Phaser.game ( I set its size to my window.width and window.height, not having much trouble with this...)
Summing up:
Using the above for loop method of generating tiles, how can I make infinite/almost infinite maps that are generated when the camera that follows my character moves to another region? And besides that, what is the proper way to always spawn my character in the middle of my generated grass map, and to have the camera start where the character is?
I will appreciate the help a lot and hope this might be valuable for people in similar situations to mine. For any extra information just ask and if you want the function for generating the tiles by a specific number I will gladly post it. Sorry for any language mistakes.
Thank you very much.
EDIT:
I was able to spawn my character always in the middle of the grass by setting its X and Y to (size/width) and (size/height). Size is the measure of X and Y in px.

Reading data from shader

I'm trying to learn how to take advantage of GPU possibilities for three.js and WebGL stuff, so I'm just analysing code to get some patterns and methods for how things are done, and I need some code explained.
I found this example: One million particles, which seems to be the easiest one involving calculations made in shaders and spit back out.
So from what I have figured out:
- Data for the velocity and position of particles is kept in textures passed to shaders, which perform the calculations there and hand them back for the update
- Particles are created randomly on the plane, no more than the texture size?
for (var i = 0; i < 1000000; i++) {
    particles.vertices.push(new THREE.Vector3((i % texSize) / texSize,
        Math.floor(i / texSize) / texSize, 0));
}
I don't see any particle position updates? How is the data retrieved from the shaders, and how does it update each particle?
Does pick() only pass the mouse position, to calculate the direction of particle movement?
Why are there 2 buffers, and 8 shaders (4 pairs of fragment and vertex shaders)? Is the one pair for calculating velocity and position not enough?
How does the shader update the texture? I only see reading from it, not writing to it.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how this result gets computed nearly entirely on the GPU via WebGL/Three.js - it might look a bit sloppy as I'm using the integrated graphics of an Intel i7 4770k:
Introduction:
Simple idea to keep everything intra-GPU: each particle's state will be represented by one texture pixel color value. One million particles will result in 1024x1024 pixel textures, one to hold the current positions and another one to hold the velocities of those particles.
Nobody ever forbade abusing the RGB color values of a texture for completely different data in the 0...255 universe. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to save in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
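For instance, the initial upload of such a state texture might look roughly like this (a sketch only, assuming a raw WebGL context gl; the byte encoding and random initial positions are assumptions, not taken from the demo's source):

// Pack one particle's state into one RGBA pixel of a 1024x1024 texture.
// Red/green hold an 8-bit x/y position here; blue/alpha are left free for other data.
var texSize = 1024;
var data = new Uint8Array(texSize * texSize * 4);
for (var i = 0; i < texSize * texSize; i++) {
    data[i * 4 + 0] = Math.floor(Math.random() * 256);  // x position, 0..255
    data[i * 4 + 1] = Math.floor(Math.random() * 256);  // y position, 0..255
    data[i * 4 + 2] = 0;                                 // spare channel
    data[i * 4 + 3] = 255;                               // spare channel
}
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, texSize, texSize, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, data);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);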
They basically used multiple shaders in a sequential order. From the source code, one can identify these steps of their processing pipeline:
Randomize particles (ignored in this answer) ('randShader')
Determine each particles velocity by its distance to mouse location ('velShader')
Based on velocity, move each particle accordingly ('posShader')
Display the screen ('dispShader')
.
Step 2: Determining Velocity per particle:
They call a draw process on 1 million points whose output will be saved as a texture. In the vertex shader each vertex gets an additional varying named "vUv" (two components), which basically determines the x and y pixel position inside the textures used in the process.
The next step is its fragment shader, as only that shader can produce output (as RGB values into the framebuffer, which gets used as a texture buffer afterwards - all happening inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable called uniform vec3 targetPos;. Those uniforms are set cheaply with each frame from the CPU, because they are shared among all instances and don't involve large memory transfers. (It contains the mouse coordinate, probably in the -1.00f to +1.00f universe - they probably also update the mouse coords only once every FEW frames, to lower CPU usage.)
What's going on here? Well, that shader calculates the distance of that particle to the mouse coordinate and, depending on that, alters that particle's velocity - the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying/overshooting the mouse position, depending on the gray value.
.
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again output as a texture - this time into the positionTexture. Until the whole frame has been rendered (into the default framebuffer) and then marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
.
Step 4: Prime time (=output)
To output the results, they probably again took a million points/vertices and gave the positionTexture as an input. Then the vertex shader sets the position of each point by reading the texture's RGB value at location x,y (passed as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which means it takes about 20 particles stacked on top of each other to fully light up a pixel).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
.
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. Just noticed this answer actually became a super duper detailed one - fly through the thick keywords to get the short version.

JavaScript "pixel"-perfect collision detection for rotating sprites using math (probably linear algebra)

I'm making a 2D game in JavaScript. For it, I need to be able to "perfectly" check collision between two sprites which have x/y positions (corresponding to their centre), a rotation in radians, and of course known width/height.
After spending many weeks of work (yeah, I'm not even exaggerating), I finally came up with a working solution, which unfortunately turned out to be about 10,000x too slow and impossible to optimize in any meaningful manner. I have entirely abandoned the idea of actually drawing and reading pixels from a canvas. That's just not going to cut it, but please don't make me explain in detail why. This needs to be done with math and an "imagined" 2D world/grid, and from talking to numerous people, the basic idea became obvious. However, the practical implementation is not. Here's what I do and want to do:
What I already have done
At the beginning of the program, each sprite is looked through pixel by pixel in its default upright position, and a 1-dimensional array is filled with data corresponding to the alpha channel of the image: solid pixels are represented by a 1, and transparent ones by a 0. See figure 3.
The idea behind that is that those 1s and 0s no longer represent "pixels", but "little math orbs positioned in perfect distances to each other", which can be rotated without "losing" or "adding" data, as happens with pixels if you rotate images in anything but 90 degrees at a time.
I naturally do the quick "bounding box" check first to see if I should bother calculating accurately. This is done. The problem is the fine/"for-sure" check...
What I cannot figure out
Now that I need to figure out whether the sprites collide for sure, I need to construct a math expression of some sort using "linear algebra" (which I do not know) to determine if these "rectangles of data points", positioned and rotated correctly, both have a "1" in an overlapping position.
Although the theory is very simple, the practical code needed to accomplish this is simply beyond my capabilities. I've stared at the code for many hours, asking numerous people (and had massive problems explaining my problem clearly) and really put in an effort. Now I finally want to give up. I would very, very much appreciate getting this done with. I can't even give up and "cheat" by using a library, because nothing I find even comes close to solving this problem from what I can tell. They are all impossible for me to understand, and seem to have entirely different assumptions/requirements in mind. Whatever I'm doing always seems to be some special case. It's annoying.
This is the pseudo code for the relevant part of the program:
function doThisAtTheStartOfTheProgram()
{
    makeQuickVectorFromImageAlpha(sprite1);
    makeQuickVectorFromImageAlpha(sprite2);
}

function detectCollision(sprite1, sprite2)
{
    // This easy, outer check works. Please ignore it as it is unrelated to the problem.
    if (bounding_box_match)
    {
        /*
        This part is the entire problem.
        I must do a math-based check to see if they really collide.
        These are the relevant variables as I have named them:

        sprite1.x
        sprite1.y
        sprite1.rotation // in radians
        sprite1.width
        sprite1.height
        sprite1.diagonal // might not be needed, but is provided

        sprite2.x
        sprite2.y
        sprite2.rotation // in radians
        sprite2.width
        sprite2.height
        sprite2.diagonal // might not be needed, but is provided

        sprite1.vectorForCollisionDetection
        sprite2.vectorForCollisionDetection

        Can you please help me construct the math expression, or the series of math
        expressions, needed to do this check?

        To clarify, using the variables above, I need to check if the two sprites (which
        can rotate around their centre, have any position and any dimensions) are colliding.
        A collision happens when at least one "unit" (an imagined sphere) of BOTH sprites
        is on the same unit in our imagined 2D world (starting from 0,0 in the top-left).
        */
        if (accurate_check_goes_here)
            return true;
    }
    return false;
}
In other words, "accurate_check_goes_here" is what I wonder what it should be. It doesn't need to be a single expression, of course, and I would very much prefer seeing it done in "steps" (with comments!) so that I have a chance of understanding it, but please don't see this as "spoon feeding". I fully admit I suck at math and this is beyond my capabilities. It's just a fact. I want to move on and work on the stuff I can actually solve on my own.
To clarify: the 1D arrays are 1D and not 2D due to performance. As it turns out, speed matters very much in JS World.
Although this is a non-profit project, entirely made for private satisfaction, I just don't have the time and energy to order and sit down with some math book and learn about that from the ground up. I take no pride in lacking the math skills which would help me a lot, but at this point, I need to get this game done or I'll go crazy. This particular problem has prevented me from getting any other work done for far too long.
I hope I have explained the problem well. However, one of the most frustrating feelings is when people send well-meaning replies that unfortunately show that the person helping has not read the question. I'm not pre-insulting you all -- I just wish that won't happen this time! Sorry if my description is poor. I really tried my best to be perfectly clear.
Okay, so I need "reputation" to be able to post the illustrations I spent time to create to illustrate my problem. So instead I link to them:
Illustrations
(censored by Stackoverflow)
(censored by Stackoverflow)
OK. This site won't let me even link to the images. Only one. Then I'll pick the most important one, but it would've helped a lot if I could link to the others...
First you need to understand that detecting such collisions cannot be done with a single/simple equation, because the shapes of the sprites matter, and these are described by an array of Width x Height = Area bits. So the worst-case complexity of the algorithm must be at least O(Area).
Here is how I would do it:
Represent the sprites in two ways:
1) a bitmap indicating where pixels are opaque,
2) a list of the coordinates of the opaque pixels. [Optional, for speedup, in case of hollow sprites.]
Choose the sprite with the shortest pixel list. Find the rigid transform (translation + rotation) that transforms the local coordinates of this sprite into the local coordinates of the other sprite (this is where linear algebra comes into play - the rotation is the difference of the angles, the translation is the vector between upper-left corners - see http://planning.cs.uiuc.edu/node99.html).
Now scan the opaque pixel list, transforming the local coordinates of the pixels to the local coordinates of the other sprite. Check if you fall on an opaque pixel by looking up the bitmap representation.
This takes at worst O(Opaque Area) coordinate transforms + pixel tests, which is optimal.
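A sketch of steps 2 and 3 in JavaScript (assuming each sprite carries a pixels list of opaque {x, y} offsets from its centre and a bitmap array of 0/1 values, as the steps above describe; the property names are made up):

// Transform sprite a's opaque-pixel list into sprite b's local grid and
// test each transformed point against b's bitmap.
function spritesCollide(a, b) {
    var cosA = Math.cos(a.rotation), sinA = Math.sin(a.rotation);
    var cosB = Math.cos(-b.rotation), sinB = Math.sin(-b.rotation);
    for (var i = 0; i < a.pixels.length; i++) {
        var p = a.pixels[i];
        // a's local coords -> world coords (rotate, then translate)
        var wx = a.x + p.x * cosA - p.y * sinA;
        var wy = a.y + p.x * sinA + p.y * cosA;
        // world coords -> b's local grid (undo b's translation, then its rotation)
        var dx = wx - b.x, dy = wy - b.y;
        var bx = Math.round(dx * cosB - dy * sinB + (b.width - 1) / 2);
        var by = Math.round(dx * sinB + dy * cosB + (b.height - 1) / 2);
        // look the cell up in b's bitmap
        if (bx >= 0 && by >= 0 && bx < b.width && by < b.height &&
                b.bitmap[by * b.width + bx] === 1) {
            return true;
        }
    }
    return false;
}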
If your sprites are zoomed in (big pixels), as a first approximation you can ignore the zooming. If you need more accuracy, you can think of sampling a few points per pixel. Exact computation will involve a square/square intersection algorithm (with rotation), which is more complex and costly. See http://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm.
Here is an exact solution that will work regardless the size of the pixels (zoomed or not).
Use both a bitmap representation (1 opacity bit per pixel) and a decomposition into squares or rectangles (rectangles are optional, just an optimization; single pixels are ok).
Process all rectangles of the (source) sprite in turn. By means of rotation/translation, map the rectangles to the coordinate space of the other sprite (target). You will obtain a rotated rectangle overlaid on a grid of pixels.
Now you will perform a filling of this rectangle with a scanline algorithm: first split the rectangle in three (two triangles and one parallelogram), using horizontal lines through the rectangle vertices. For the three shapes independently, find all horizontal between-pixel lines that cross them (this is simply done by looking at the ranges of Y values). For every such horizontal line, compute the two intersection points. Then find all pixel corners that fall between the two intersections (range of X values). For any pixel having a corner inside the rectangle, look up the corresponding bit in the (target) sprite bitmap.
Not too difficult to program, and no complicated data structures. The computational effort is roughly proportional to the number of target pixels covered by every source rectangle.
Although you have already stated that you don't feel rendering to the canvas and checking that data is a viable solution, I'd like to present an idea which may or may not have already occurred to you and which ought to be reasonably efficient.
This solution relies on the fact that rendering any pixel to the canvas with half-opacity twice will result in a pixel of full opacity. The steps follow:
Size the test canvas so that both sprites will fit on it (this will also clear the canvas, so you don't have to create a new element each time you need to test for collision).
Transform the sprite data such that any pixel that has any opacity or color is set to be black at 50% opacity.
Render the sprites at the appropriate distance and relative position to one another.
Loop through the resulting canvas data. If any pixels have an opacity of 100%, then a collision has been detected. Return true.
Else, return false.
Wash, rinse, repeat.
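A rough sketch of those steps (assuming each sprite already has a blackened, 50%-opacity copy of its image available as sprite.maskImage; that name and the exact alpha threshold are illustrative):

// Draw both half-opacity silhouettes onto a shared test canvas, then scan for
// any pixel whose alpha is well above what a single 50% layer produces.
var testCanvas = document.createElement('canvas');

function spritesCollide(s1, s2) {
    // size (and thereby clear) the canvas so both sprites fit, drawn relative to s1
    var pad = Math.max(s1.diagonal, s2.diagonal);
    testCanvas.width = testCanvas.height =
        Math.ceil(Math.abs(s2.x - s1.x) + Math.abs(s2.y - s1.y) + 2 * pad);
    var ctx = testCanvas.getContext('2d');
    var cx = testCanvas.width / 2, cy = testCanvas.height / 2;

    [s1, s2].forEach(function (s) {
        ctx.save();
        ctx.translate(cx + (s.x - s1.x), cy + (s.y - s1.y));  // keep relative offset
        ctx.rotate(s.rotation);
        ctx.drawImage(s.maskImage, -s.width / 2, -s.height / 2);
        ctx.restore();
    });

    var data = ctx.getImageData(0, 0, testCanvas.width, testCanvas.height).data;
    for (var i = 3; i < data.length; i += 4) {
        // one 50%-alpha layer leaves alpha near 128; clearly more than that means
        // both silhouettes covered this pixel
        if (data[i] > 160) return true;
    }
    return false;
}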
This method should run reasonably fast. Now, for optimization - the bottleneck here will likely be the final opacity check (although rendering the images to the canvas could be slow, as could clearing/resizing it):
reduce the resolution of the opacity detection in the final step, by changing the increment in your loop through the pixels of the final data.
Loop from middle up and down, rather than from the top to bottom (and return as soon as you find any single collision). This way you have a higher chance of encountering any collisions earlier in the loop, thus reducing its length.
I don't know what your limitations are and why you can't render to canvas, since you have declined to comment on that, but hopefully this method will be of some use to you. If it isn't, perhaps it might come in handy to future users.
Please see if the following idea works for you. Here I create a linear array of points corresponding to pixels set in each of the two sprites. I then rotate/translate these points, to give me two sets of coordinates for individual pixels. Finally, I check the pixels against each other to see if any pair are within a distance of 1 - which is "collision".
You can obviously add some segmentation of your sprite (only test "boundary pixels"), test for bounding boxes, and do other things to speed this up - but it's actually pretty fast (once you take all the console.log() statements out that are just there to confirm things are behaving…). Note that I test for dx - if that is too large, there is no need to compute the entire distance. Also, I don't need the square root for knowing whether the distance is less than 1.
I am not sure whether the use of new Array() inside the pixLocs function will cause a problem with memory leaks. Something to look at if you run this function 30 times per second...
<html>
<script type="text/javascript">
var s1 = {
'pix': new Array(0,0,1,1,0,0,1,0,0,1,1,0),
'x': 1,
'y': 2,
'width': 4,
'height': 3,
'rotation': 45};
var s2 = {
'pix': new Array(1,0,1,0,1,0,1,0,1,0,1,0),
'x': 0,
'y': 1,
'width': 4,
'height': 3,
'rotation': 90};
pixLocs(s1);
console.log("now rotating the second sprite...");
pixLocs(s2);
console.log("collision detector says " + collision(s1, s2));
function pixLocs(s) {
var i;
var x, y;
var l1, l2, r1, r2, p1, p2;
var ca, sa;
var pi;
s.locx = new Array();
s.locy = new Array();
pi = Math.acos(0.0) * 2;
ca = Math.cos(s.rotation * pi / 180.0);
sa = Math.sin(s.rotation * pi / 180.0);
i = 0;
for(x = 0; x < s.width; ++x) {
for(y = 0; y < s.height; ++y) {
// offset to center of sprite
if(s.pix[i++]==1) {
l1 = x - (s.width - 1) * 0.5;
l2 = y - (s.height - 1) * 0.5;
// rotate:
r1 = ca * l1 - sa * l2;
r2 = sa * l1 + ca * l2;
// add position:
p1 = r1 + s.x;
p2 = r2 + s.y;
console.log("rotated pixel [ " + x + "," + y + " ] is at ( " + p1 + "," + p2 + " ) " );
s.locx.push(p1);
s.locy.push(p2);
}
else console.log("no pixel at [" + x + "," + y + "]");
}
}
}
function collision(s1, s2) {
var i, j;
var dx, dy;
for (i = 0; i < s1.locx.length; i++) {
for (j = 0; j < s2.locx.length; j++) {
dx = Math.abs(s1.locx[i] - s2.locx[j]);
if(dx < 1) {
dy = Math.abs(s1.locy[i] - s2.locy[j]);
if (dx*dx + dy*dy < 1) return 1;
}
}
}
return 0;
}
</script>
</html>

Is it possible to run a fold operation on the GPU in a browser using WebGL?

I am running a data processing application that is pretty much:
var f = function(a, b) { /* any function of type int -> int -> int */ };
var g = function(a) { /* any function of type int -> int */ };

function my_computation(state) {
    var data = state[2];
    for (var i = 0, l = data.length, res = 0; i < l; ++i)
        res = f(res, g(data[i]));
    state[3] = res;
    return res;
}
This pattern is pretty much that of a foldl. That computation is not fast enough on CPU. Is it possible to somehow run that computation on the GPU, on the browser?
From your comment:
I don't know much about vertex shaders but to my knowledge it worked in isolated pixels, and for the folding you'd kinda need an accumulation pattern. No?
If you want to use WebGL for computation over an array, you most likely will want to do it in a fragment shader, not a vertex shader. If you use input geometry that covers the entire viewport, a fragment shader is then simply a program that computes an image pixel-by-pixel. It can use as inputs numeric parameters and arbitrary textures. Furthermore, you can render output to a texture.
This is how you do inputs: you stash the input data in a texture, and have the fragment shader do lookups in the texture. It's perfectly normal to do multiple offset lookups in a texture; for example, this is how a blur effect works.
You're right to be concerned about accumulation. There is no native way to do a fold over all pixels. However, if you can express your algorithm in a "map-reduce" fashion, where the reduce operation combines two outputs and doesn't care about whether they are the input from a previous reduce step, then you can do it like so:
Load your input data into a 1-pixel high by N-pixel wide texture. (Not sure whether using square textures might give better upper limits, but this is simpler to describe.)
Run your "map" (g, non-accumulating computation) shader program producing an intermediate-outputs texture.
Run a shader which performs the "reduce" operation (f) on each pair of adjacent pixels (or similar) of the intermediate texture, producing another texture half as wide.
Do the same thing again on that output.
This will get you your single answer in only O(log n) JavaScript operations.
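In outline, the JavaScript driving that ping-pong reduction might look like this (a sketch only; runShader is a hypothetical helper that binds a program, sets its source texture, and draws a full-viewport quad into the given framebuffer):

// Apply the "map" shader once, then repeatedly halve the data with the "reduce"
// shader, ping-ponging between two textures/framebuffers until one pixel remains.
function gpuFold(gl, mapProgram, reduceProgram, inputTex, texA, fboA, texB, fboB, n) {
    runShader(mapProgram, inputTex, fboA, n);               // g() applied to every element

    var src = { tex: texA, fbo: fboA };
    var dst = { tex: texB, fbo: fboB };
    var width = n;
    while (width > 1) {
        width = Math.ceil(width / 2);
        runShader(reduceProgram, src.tex, dst.fbo, width);  // f() on each pair of pixels
        var tmp = src; src = dst; dst = tmp;                // swap ping-pong targets
    }

    // read the single remaining pixel back to the CPU
    gl.bindFramebuffer(gl.FRAMEBUFFER, src.fbo);
    var out = new Uint8Array(4);
    gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, out);
    return out;
}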
I would say yes. I've often thought about this myself. Your data would be attached as a vertex attribute buffer and a custom shader would execute your fold code, 'rendering' the results to an off-screen buffer. You would then read the result buffer back into CPU memory.
Given that you want to run it on the browser, you are limited by what WebGL/extensions support, specifically on CPU access to GPU data.
You can take a look at the shader code for filters/edge detection in the code base below, which shows how you can do this in a fragment shader.
https://github.com/prabindh/sgxperf/blob/master/sgxperf_strings.cpp
After this, you can access the data using readPixels. NOTE - the fragment shader can only output fixed-point data.
