Performance Warning Processing.js

When I write Processing.js in the JavaScript flavor I get a performance warning that I didn't get when I used Processing.js to parse Processing code. I've created a fairly simple sketch with 3D support to dig into it, and the console is flooded with this warning:
PERFORMANCE WARNING: Attribute 0 is disabled. This has significant performance penalty
What does that mean? And even more importantly: how do I fix it?
Here's the sketch (watch/edit on codepen.io):
var can = document.createElement("canvas");
var sketch = function(p){
  p.setup = function(){
    p.size(800, 600, p.OPENGL);
    p.fill(170);
  };
  p.draw = function(){
    p.pushMatrix();
    p.translate(p.width/2, p.height/2);
    p.box(50);
    p.popMatrix();
  };
};
document.body.appendChild(can);
new Processing(can, sketch);

This is an issue in Processing.js
For a detailed explanation: OpenGL and OpenGL ES have attributes. Every attribute can either fetch values from a buffer or provide a constant value. Except that in OpenGL, attribute 0 is special: it cannot provide a constant value, it MUST get values from a buffer. WebGL, though, is based on OpenGL ES 2.0, which doesn't have this limitation.
So, when WebGL is running on top of OpenGL and the user does not use attribute 0 (it's set to use a constant value), WebGL has to make a temporary buffer, fill it with the constant value, and give it to OpenGL. This is slow. Hence the warning.
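To make that concrete, here is a minimal plain-WebGL sketch of the two modes an attribute can be in (loc is a hypothetical attribute location, and a buffer is assumed to be bound to ARRAY_BUFFER):
// Mode 1: the attribute fetches per-vertex values from the bound buffer.
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 3, gl.FLOAT, false, 0, 0);

// Mode 2: the attribute array is disabled, so the attribute provides
// a single constant value instead, e.g. a default normal.
gl.disableVertexAttribArray(loc);
gl.vertexAttrib3f(loc, 0.0, 0.0, 1.0);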
The issue in Processing.js is that they have a single shader that handles multiple use cases. It has attributes for normals, positions, colors, and texture coordinates. Depending on what you ask Processing to draw, it might not use all of these attributes. For example, it commonly might not use normals: normals are only needed in Processing to support lights, so if you have no lights there are no normals (I'm guessing). In that case they turn the normals attribute off. Unfortunately, if normals happen to be on attribute 0, then in order to render, WebGL has to make a temp buffer, fill it with a constant value, and then render.
The way around this is to always use attribute 0. In the case of Processing.js, they will always use position data, so before linking their shaders they should call bindAttribLocation:
// "aVertex" is the name of the attribute used for position data in
// Processing.js
gl.bindAttribLocation(program, 0, "aVertex");
This will make the attribute 'aVertex' always use attribute 0, and since they always use 'aVertex' in every use case, they'll never get that warning.
Ideally you should always bind your attribute locations. That way you don't have to query them after linking.
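As a minimal sketch of the required order (vertexShader and fragmentShader are assumed to be compiled shader objects; "aVertex" is the Processing.js position attribute from above):
var program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);

// bindAttribLocation only takes effect at link time, so it must be
// called before linkProgram.
gl.bindAttribLocation(program, 0, "aVertex");

gl.linkProgram(program);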

Related

GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 1, three.js

This has had me beat for a while now. I'm making a game, and the main map is a model using the obj format. I load it like this:
var objLoader = new THREE.OBJLoader();
objLoader.setPath('Assets/');
objLoader.load('prison.obj', function(prison){
  prison.rotation.x = Math.PI / 2;
  prison.position.z += 0.1;
  prison.scale.set(15, 15, 15);
  scene.add(prison);
});
When I was loading the same model at a smaller size, it worked normally. But now that I have added more to the model and it is much bigger, WebGL starts giving me this warning: [.WebGL-0x7fb8de02fe00]GL ERROR :GL_INVALID_OPERATION : glDrawArrays: attempt to access out of range vertices in attribute 1. This warning appears 256 times before WebGL says WebGL: too many errors, no more errors will be reported to the console for this context.
And with this warning, my model doesn't load completely. In Preview the model looks as expected, but in Three.js it renders differently.
Well, I'm not exactly sure what's wrong here:
Maybe because I'm using OBJLoader CDN
Maybe my model size is too large
Maybe I have no idea what I'm doing
Any help is appreciated, thanks. Let me know if you need more detail.
This error is telling you that your geometry's attribute counts don't match up. For example, your geometry could have:
100 vertex positions
99 vertex normals
99 vertex UVs
... or something of that nature. When WebGL looks up info on that 100th vertex, the shorter attributes have run out of data, hence "attempt to access out-of-range vertices".
Ideally, you'd want to re-export the OBJ asset so you don't have to manually find the geometry that's causing the problem. However, in case you cannot get a new OBJ, you could either:
prevent the problem geometry from rendering with mesh.visible = false, or
fix the geometry attribute count. To fix it, you'll have to find which attribute is short:
// We assume you already found the mesh with the problem.
const problemGeometry = mesh.geometry;
// Now we dig through the console to see each attribute.
// Look up each attribute.count property to see which one is short.
console.log(problemGeometry.attributes);
// Then you'll have to set the draw range to the lowest of these counts
// Here I assume the lowest is 99
problemGeometry.setDrawRange(0, 99);
Don't forget to also look at the geometry.index attribute, if your geometry has one. That should fix your geometry to render with the lowest common number of attributes. See the three.js documentation for more info on setDrawRange.
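If you'd rather not read the counts off the console by hand, a small hypothetical helper (assuming a non-indexed BufferGeometry; for indexed geometry the draw range counts indices instead) can clamp the draw range automatically:
// Hypothetical helper: clamp the draw range to the smallest attribute
// count so no attribute is ever read out of range.
function clampDrawRange(geometry) {
  let minCount = Infinity;
  for (const name in geometry.attributes) {
    minCount = Math.min(minCount, geometry.attributes[name].count);
  }
  geometry.setDrawRange(0, minCount);
}

clampDrawRange(mesh.geometry);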

Getting WebGLTexture by texture pointer

I'm working on a project using Unity WebGL.
What I need to do is display the game scene in another browser window.
So what I tried was rendering the scene to a RenderTexture and sending the texture pointer (from GetNativeTexturePtr()) to the browser side.
When I send the texture pointer I use a .jslib function like this:
ExtractWindow: function (windowId, w, h, texturePtr) {
  ViewerManager.OnExtractWindow(windowId, w, h, GLctx, GL.textures[texturePtr]);
}
I used GL.textures[texturePtr] because I saw it in https://docs.unity3d.com/Manual/webgl-interactingwithbrowserscripting.html.
I think it's supposed to return the WebGLTexture object, but it's returning null.
This is the only way I know to get a WebGLTexture (I'm pretty much a beginner in WebGL and JavaScript). I'm not even sure if GL.textures[] is a Unity mechanism or a WebGL one.
Does anybody know how GL.textures[] works? Or is there another way to get a WebGLTexture from a texture pointer?
Thanks for reading my question.
Answer to my own question.
The reason why GL.textures[texturePtr] was returning null was that texturePtr was 0. I found that GetNativeTexturePtr() sometimes works and sometimes just returns 0; I'm not sure why it behaves like that. But the workaround I found is to access the colorBuffer property of the render texture before calling GetNativeTexturePtr(); then it returns the proper pointer.
var cb = _renderTexture.colorBuffer;
var texId = _renderTexture.GetNativeTexturePtr().ToInt64();
// texId is not 0 anymore.
The constructor alone for a RenderTexture doesn't create the underlying hardware object, so the underlying native ptr will be null until something forces it to be created.
Check out https://docs.unity3d.com/ScriptReference/RenderTexture.Create.html
Try calling _renderTexture.Create() before _renderTexture.GetNativeTexturePtr().
I'm not certain, but trying to access the color buffer on a render texture that hasn't been created yet might create it under the hood.
Cheers
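On the JavaScript side it may also be worth guarding against the zero pointer explicitly. A hedged sketch of the same .jslib function with that check added (the warning text is illustrative):
ExtractWindow: function (windowId, w, h, texturePtr) {
  var tex = GL.textures[texturePtr];
  if (!tex) {
    // A texturePtr of 0 means the native texture was never created.
    console.warn("No WebGLTexture for pointer", texturePtr);
    return;
  }
  ViewerManager.OnExtractWindow(windowId, w, h, GLctx, tex);
}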

HTML5 Canvas - Scaling relative to the center of an object without translating context

I'm working on a canvas game that has several particle generators. The particles gradually scale down after being created. To scale the particles down from their center points I am using the context.translate() method:
context.save();
context.translate(particle.x, particle.y);
context.translate(-particle.width/2, -particle.height/2);
context.drawImage(particle.image, 0, 0, particle.width, particle.height);
context.restore();
I've read several sources that claim the save(), translate() and restore() methods are quite computationally expensive. Is there an alternative method I can use?
My game is targeted at mobile browsers so I am trying to optimize for performance as much as possible.
Thanks in advance!
Yes, just use setTransform() at the end instead of using save/restore:
//context.save();
context.translate(particle.x, particle.y);
context.translate(-particle.width/2, -particle.height/2);
context.drawImage(particle.image, 0, 0, particle.width, particle.height);
//context.restore();
context.setTransform(1,0,0,1,0,0); // reset matrix
This assumes there are no other accumulated transforms in use (in which case you could refactor the code to set absolute transforms where needed).
The arguments given represent an identity matrix, i.e. a reset transform state.
This is much faster than the save/restore approach, which stores and restores not only the transform state but also style settings, shadow settings, the clipping area and so on.
You could also combine the two translate calls into a single call, and use multiplication instead of division (which is generally faster at the CPU level):
context.translate(particle.x-particle.width*0.5, particle.y-particle.height*0.5);
or simply use the x/y coordinates directly when drawing the particle, without translating at all; see the sketch below.
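A minimal sketch of that last option, assuming the same particle fields as above:
// No transform state is touched at all: the centered position is
// computed inline and passed straight to drawImage.
context.drawImage(
  particle.image,
  particle.x - particle.width * 0.5,
  particle.y - particle.height * 0.5,
  particle.width,
  particle.height
);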

Is it possible to run a fold operation on the GPU in a browser using WebGL?

I am running a data processing application that is pretty much:
var f = function(a, b){ /* any function of type int -> int -> int */ };
var g = function(a){ /* any function of type int -> int */ };
function my_computation(state){
  var data = state[2];
  for (var i = 0, l = data.length, res = 0; i < l; ++i)
    res = f(res, g(data[i]));
  state[3] = res;
  return res;
}
This pattern is pretty much that of a foldl. That computation is not fast enough on the CPU. Is it possible to somehow run it on the GPU, in the browser?
From your comment:
I don't know much about vertex shaders but to my knowledge it worked in isolated pixels, and for the folding you'd kinda need an accumulation pattern. No?
If you want to use WebGL for computation over an array, you most likely will want to do it in a fragment shader, not a vertex shader. If you use input geometry that covers the entire viewport, a fragment shader is then simply a program that computes an image pixel-by-pixel. It can use as inputs numeric parameters and arbitrary textures. Furthermore, you can render output to a texture.
This is how you do inputs: you stash the input data in a texture, and have the fragment shader do lookups in the texture. It's perfectly normal to do multiple offset lookups in a texture; for example, this is how a blur effect works.
You're right to be concerned about accumulation. There is no native way to do a fold over all pixels. However, if you can express your algorithm in a "map-reduce" fashion, where the reduce operation combines two outputs and doesn't care about whether they are the input from a previous reduce step, then you can do it like so:
Load your input data into a 1-pixel high by N-pixel wide texture. (Not sure whether using square textures might give better upper limits, but this is simpler to describe.)
Run your "map" (g, non-accumulating computation) shader program producing an intermediate-outputs texture.
Run a shader which performs the "reduce" operation (f) on each pair of adjacent pixels (or similar) of the intermediate texture, producing another texture half as wide.
Do the same thing again on that output.
This will get you your single answer in only O(log n) JavaScript operations.
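A hedged sketch of that ping-pong reduction loop follows. The helpers createFramebufferTexture and drawFullscreenQuad, the reduceProgram shader (which applies f to pairs of adjacent pixels), and the variables dataLength and inputTexture are assumptions, not shown here:
// Repeatedly halve the 1-by-width texture until one pixel remains.
var width = dataLength;
var src = inputTexture;
while (width > 1) {
  width = Math.ceil(width / 2);
  var dst = createFramebufferTexture(gl, width, 1); // assumed helper
  gl.bindFramebuffer(gl.FRAMEBUFFER, dst.framebuffer);
  gl.viewport(0, 0, width, 1);
  gl.useProgram(reduceProgram);
  gl.bindTexture(gl.TEXTURE_2D, src);
  drawFullscreenQuad(gl); // assumed helper
  src = dst.texture;
}
// The last framebuffer is still bound; read back the single result pixel.
var out = new Uint8Array(4);
gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, out);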
I would say yes. I've often thought about this myself. Your data would be attached as a vertex attribute buffer, and a custom shader would execute your fold code, 'rendering' the results to an off-screen buffer. You would then read the result buffer back into CPU memory.
Given that you want to run it on the browser, you are limited by what WebGL/extensions support, specifically on CPU access to GPU data.
You can take a look at the shader code for filters/edge detection in the code base below, which shows how you can do this in a fragment shader:
https://github.com/prabindh/sgxperf/blob/master/sgxperf_strings.cpp
After this, you can access the data using readPixels. NOTE - the fragment shader can only output fixed-point data.

How do 2D drawing frameworks such as Pixi.js make canvas drawing faster?

I found a bunnymark for JavaScript canvas here.
Now of course, I understand their default renderer uses WebGL, but I am only interested in the native 2D context performance for now. I disabled WebGL in Firefox and, after spawning 16,500 bunnies, the counter showed an FPS of 25. I decided to write my own very simple rendering loop to see how much overhead Pixi added. To my surprise, I only got an FPS of 20.
My roughly equivalent JSFiddle.
So I decided to take a look at their source here, and it doesn't appear that the magic is in their rendering code:
do
{
  transform = displayObject.worldTransform;
  ...
  if (displayObject instanceof PIXI.Sprite)
  {
    var frame = displayObject.texture.frame;
    if (frame)
    {
      context.globalAlpha = displayObject.worldAlpha;
      context.setTransform(transform[0], transform[3], transform[1],
                           transform[4], transform[2], transform[5]);
      context.drawImage(displayObject.texture.baseTexture.source,
        frame.x,
        frame.y,
        frame.width,
        frame.height,
        (displayObject.anchor.x) * -frame.width,
        (displayObject.anchor.y) * -frame.height,
        frame.width,
        frame.height);
    }
  }
Curiously, it seems they are using a linked list for their rendering loop, and profiling both apps shows that while my version allocates the same amount of CPU time per frame, their implementation shows CPU usage in spikes.
My knowledge ends here unfortunately and I am curious if anyone can shed some light on whats going on.
In my opinion, it boils down to how "compilable" (cacheable) the code is. Chrome and Firefox use two different JavaScript engines, which optimize and cache code differently.
Canvas operations
Using a transform versus direct coordinates should not have an impact, as setting a transform merely updates the matrix, which is used in any case with whatever is in it.
The type of the position values can affect performance though (float versus integer values), but as both your fiddle and PIXI seem to use floats only, this is not the key here.
So here I don't think canvas is the cause of the difference.
Variable and property caching
(I unintentionally focused too much on the prototypal aspect in the first version of this answer. The essence I was trying to get at was mainly object traversal, so the following text is re-worded a bit.)
PIXI uses object properties just as the fiddle does, but the custom objects in PIXI are smaller, so traversing the object tree takes less time than traversing a large object such as canvas or image (a property such as width would also be near the end of such an object).
It's a well-known classic optimization trick to cache variables for this very reason (traversal time). The effect is smaller today, as engines have become smarter; V8 in Chrome in particular seems able to predict/cache this internally, while in Firefox it still seems to have some impact if you don't cache these variables in code.
Does it matter performance-wise? For short operations very little, but drawing 16,500 bunnies onto a canvas is demanding and does gain a benefit from this (in FF), so any micro-optimization actually counts in situations such as this.
Demos
I prototyped the "renderer" to get even closer to PIXI, as well as caching the object properties. This gave a performance boost in Firefox:
http://jsfiddle.net/AbdiasSoftware/2Dbys/8/
I used a slow computer (to scale the impact), which ran your fiddle at about 5 FPS. After caching the values it ran at 6-7 FPS, which is more than a 20% increase on this computer, showing that it does have an effect. On a computer with a larger CPU instruction cache and so forth the effect may be smaller, but it's there, as this is related to the FF engine itself (disclaimer: I am not claiming this to be a scientific test, only a pointer :-) ).
/// cache object properties
var lastTime = 0,
    w = canvas.width,
    h = canvas.height,
    iw = image.width,
    ih = image.height;
This next version caches these values as properties on an object (itself), to show that this also improves performance compared to using large global objects directly; the result is about the same as above:
http://jsfiddle.net/AbdiasSoftware/2Dbys/9/
var RENDER = function () {
  this.width = canvas.width;
  this.height = canvas.height;
  this.imageWidth = image.width;
  this.imageHeight = image.height;
};
In conclusion
I am certain, based on the results and previous experience, that PIXI can run the code faster due to using small custom objects rather than getting the properties directly from large objects (elements) such as canvas and image.
The FF engine does not yet seem to be as "smart" as the V8 engine in regard to traversing object trees and branches, so caching variables does have an impact in FF, which shows when the demand is high (such as when drawing 16,500 bunnies per "frame").
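As a hedged micro-example of the caching idea (the render function and variable names are illustrative, not PIXI's actual code): hoist the property lookups out of the hot loop instead of touching the large canvas/image objects on every iteration.
function render(bunnies) {
  // Cached once per frame rather than looked up once per bunny.
  var iw = image.width,
      ih = image.height;
  for (var i = 0; i < bunnies.length; i++) {
    var b = bunnies[i];
    context.drawImage(image, b.x, b.y, iw, ih);
  }
}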
One difference I noticed between your version and Pixi's is this:
You render image at certain coordinates by passing x/y straight to drawImage function:
drawImage(img, x, y, ...);
...whereas Pixi translates the entire canvas context, and then draws the image at 0/0 (of the already shifted context):
setTransform(1, 0, 0, 1, x, y);
drawImage(img, 0, 0, ...);
They also pass more arguments to drawImage: arguments that control the "destination rectangle" (dx, dy, dw, dh).
I suspected this is where speed difference hides. However, changing your test to use same "technique" doesn't really make things better.
But there's something else...
I set the bunny count to 5000, disabled WebGL, and Pixi actually performs worse than the custom fiddle version.
I get ~27 FPS on Pixi, and ~32-35 FPS on the fiddle.
This is all on Chrome 33.0.1712.4 dev, Mac OS X.
I'd suspect that this is some canvas compositing issue. Canvas is transparent by default, so the page background needs to be combined with the canvas contents...
I found this in their source...
// update the background color
if (this.view.style.backgroundColor != stage.backgroundColorString &&
!this.transparent) {
this.view.style.backgroundColor = stage.backgroundColorString;
}
Maybe they set the canvas to be opaque for this demo? (The fiddle doesn't really work for me; it seems like most of the bunnies jump out with an extremely large dt most of the time.)
I don't think it's an object property access timing / compilability thing: the point is valid, but I don't think it can explain that much of a difference.
