Getting WebGLTexture by texture pointer - javascript

I'm working on a project using Unity WebGL.
What I need to do is display the game scene in another browser window.
So what I tried was rendering the scene to a RenderTexture and sending the texture pointer (from GetNativeTexturePtr()) to the browser side.
To send the texture pointer I used a .jslib function like this:
ExtractWindow: function (windowId, w, h, texturePtr) {
    ViewerManager.OnExtractWindow(windowId, w, h, GLctx, GL.textures[texturePtr]);
}
I used GL.textures[texturePtr] because I saw it in https://docs.unity3d.com/Manual/webgl-interactingwithbrowserscripting.html.
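For reference, that function lives in a .jslib plugin wrapped in mergeInto, roughly like the sketch below (ViewerManager is my own browser-side object; the mergeInto wrapper is the standard pattern from that page):
mergeInto(LibraryManager.library, {
    ExtractWindow: function (windowId, w, h, texturePtr) {
        // GL.textures maps the native texture IDs Unity passes in to WebGLTexture objects
        ViewerManager.OnExtractWindow(windowId, w, h, GLctx, GL.textures[texturePtr]);
    }
});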
I think it's supposed to return the WebGLTexture object, but it's returning null.
This is the only way I know to get a WebGLTexture (I'm pretty much a beginner in WebGL and JavaScript). I'm not even sure whether GL.textures[] is a Unity thing or a WebGL thing.
Does anybody know how GL.textures[] works? Or is there another way to get a WebGLTexture from a texture pointer?
Thanks for reading my question.

Answer to my own question.
The reason "GL.textures[texturePtr]" was returning null was that texturePtr was 0. I found that GetNativeTexturePtr() sometimes works and sometimes just returns 0; I'm not sure why it behaves like that. The workaround I found is to read the colorBuffer property of the render texture before calling GetNativeTexturePtr(); after that it returns a proper pointer.
var cb = _renderTexture.colorBuffer;
var texId = _renderTexture.GetNativeTexturePtr().ToInt64();
// texId is not 0 anymore.

The constructor alone for a RenderTexture doesn't create the underlying hardware object, so the underlying native pointer will be zero until something forces the resource to be created.
Check out https://docs.unity3d.com/ScriptReference/RenderTexture.Create.html
Try calling _renderTexture.Create() before _renderTexture.GetNativeTexturePtr().
I'm not certain, but accessing the colorBuffer property on a render texture that hasn't been created yet might create it under the hood.
Cheers

Related

Blur on three.js reflector object

I recently started working on a configurator tool, and I need a reflective surface on the ground to reflect the object we want to showcase and create a studio-like appearance.
The problem is that Three.js has a Reflector object, but it produces a 1:1 reflection, and it looks weird because it's too reflective. The question is whether anyone knows how I can make the reflected image slightly blurrier. I'm not that well versed in shaders...
Alternatively, if there are better ways, I am open to changing my approach. I am looking for the most performant way of getting a result similar to a car showcase. Image for reference:
https://comps.canstockphoto.com/3d-red-hot-rod-drawing_csp0608954.jpg
Link to the file I am using.
https://github.com/mrdoob/three.js/blob/master/examples/jsm/objects/Reflector.js
and the code I use to place it in the scene:
var mirrorGeometry = new THREE.CircleGeometry(200, 200);
var groundMirror = new Reflector(mirrorGeometry, {
    clipBias: 0.05,
    textureWidth: window.innerWidth * window.devicePixelRatio,
    textureHeight: window.innerHeight * window.devicePixelRatio,
    color: 0x777777,
    recursion: 1
});
scene.add(groundMirror);
tl;dr: The object is placed on a mirror, causing an identical reflection, which looks weird for a showcase model. I am asking how to blur the reflection, but a different approach would also work for me.
Edit: Apparently the Reflector object also cannot receive shadows (though I need to look into this first; it is also a factor in the final render).

Instancing Multiple Objects Lowers Framerate of WEBGL Application

I am developing a Three.js WebGL application where I need to render multiple objects with the same geometry, and I've stumbled upon a bottleneck. It seems that my instancing of objects has some issue that I can't really understand; maybe someone can help me with that. For context, I have a PointCloud with normals that tells me where to position each instanced object, and a quaternion built from the normal gives its orientation. I then loop through this array and place each instanced object accordingly. After looking at various posts about instancing, merging, etc., I can't figure out what I'm doing wrong.
I attach the code snippet of the method in question :
bitbucket.org/snippets/electricganesha/Mdddz
After reviewing it multiple times, I'm really wondering what is wrong here, and why this particular method slows my application down from 60 fps to 20 fps.
You might be overcompensating with the optimization.
In your loop where you merge all these geometries, try adding something like this:
var maxVerts = 1 << 16;

// if merging a new object would push the vertex count over 2^16,
// add the merged geometry to the scene and start a new one for the next batch
if (singleGeometry.vertices.length + newObject.geometry.vertices.length > maxVerts) {
    scene.add(singleGeometry);
    singleGeometry = new Geometry();
}

singleGeometry.merge(newObject.geometry, newObject.matrix);
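In full, the batching could look roughly like the sketch below. This is only an illustration of the idea, not the original code: objectsToMerge and material are placeholder names, and each finished batch is wrapped in a THREE.Mesh before being added to the scene (a bare Geometry cannot be added directly).
var maxVerts = 1 << 16;
var singleGeometry = new THREE.Geometry();

objectsToMerge.forEach(function (newObject) {
    newObject.updateMatrix();

    // flush the current batch before it crosses the 16-bit index limit
    if (singleGeometry.vertices.length + newObject.geometry.vertices.length > maxVerts) {
        scene.add(new THREE.Mesh(singleGeometry, material));
        singleGeometry = new THREE.Geometry();
    }

    singleGeometry.merge(newObject.geometry, newObject.matrix);
});

// don't forget whatever is left in the last batch
if (singleGeometry.vertices.length > 0) {
    scene.add(new THREE.Mesh(singleGeometry, material));
}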

How do 2D drawing frameworks such as Pixi.js make canvas drawing faster?

I found a bunnymark for JavaScript canvas here.
Now of course, I understand their default renderer uses WebGL, but I am only interested in the native 2D context performance for now. I disabled WebGL in Firefox and after spawning 16500 bunnies, the counter showed 25 FPS. I decided to write my own very simple rendering loop to see how much overhead Pixi added. To my surprise, I only got 20 FPS.
My roughly equivalent JSFiddle.
So I decided to take a look into their source here, and it doesn't appear that the magic is in their rendering code:
do
{
    transform = displayObject.worldTransform;
    ...

    if (displayObject instanceof PIXI.Sprite)
    {
        var frame = displayObject.texture.frame;

        if (frame)
        {
            context.globalAlpha = displayObject.worldAlpha;

            context.setTransform(transform[0], transform[3], transform[1],
                                 transform[4], transform[2], transform[5]);

            context.drawImage(displayObject.texture.baseTexture.source,
                              frame.x,
                              frame.y,
                              frame.width,
                              frame.height,
                              (displayObject.anchor.x) * -frame.width,
                              (displayObject.anchor.y) * -frame.height,
                              frame.width,
                              frame.height);
        }
    }
Curiously, it seems they are using a linked list for their rendering loop, and profiling both apps shows that while my version allocates the same amount of CPU time per frame, their implementation shows CPU usage in spikes.
My knowledge ends here, unfortunately, and I am curious whether anyone can shed some light on what's going on.
I think it boils down to how "compilable" (cacheable) the code is. Chrome and Firefox use two different JavaScript engines which, as we know, optimize and cache code differently.
Canvas operations
Using transform versus direct coordinates should not have an impact, as setting a transform merely updates the matrix, which is used in any case with whatever is in it.
The type of the position values can affect performance, though (float versus integer values), but as both your fiddle and PIXI seem to use floats only, this is not the key here.
So here I don't think canvas is the cause of the difference.
Variable and property caching
(I got unintentionally too focused on the prototypal aspect in the first version of this answer. The essence I was trying to get at was mainly object traversal, so the following text has been re-worded a bit.)
PIXI uses object properties just as the fiddle does, but these custom objects in PIXI are smaller, so traversing the object tree takes less time than traversing a larger object such as canvas or image (where a property such as width would also sit near the end of the object).
It's a well-known classic optimization trick to cache variables for this very reason (traversal time). The effect is smaller today as engines have become smarter, especially V8 in Chrome, which seems able to predict/cache this better internally, while in Firefox not caching these variables in code still seems to have some impact.
Does it matter performance-wise? For short operations, very little, but drawing 16,500 bunnies onto canvas is demanding and does benefit from it (in FF), so any micro-optimization actually counts in situations such as this.
Demos
I prototyped the "renderer" to get even closer to PIXI, as well as caching the object properties. This gave a performance boost in Firefox:
http://jsfiddle.net/AbdiasSoftware/2Dbys/8/
I used a slow computer (to scale up the impact), which ran your fiddle at about 5 FPS. After caching the values it ran at 6-7 FPS, which is more than a 20% increase on this computer, showing that it does have an effect. On a computer with a larger CPU instruction cache and so forth the effect may be smaller, but it's there, as this is related to the FF engine itself (disclaimer: I am not claiming this to be a scientific test, only a pointer :-) ).
/// cache object properties
var lastTime = 0,
    w = canvas.width,
    h = canvas.height,
    iw = image.width,
    ih = image.height;
This next version caches these variables as properties on an object (itself), to show that this also improves performance compared to reading from large global objects directly. The result is about the same as above:
http://jsfiddle.net/AbdiasSoftware/2Dbys/9/
var RENDER = function () {
    this.width = canvas.width;
    this.height = canvas.height;
    this.imageWidth = image.width;
    this.imageHeight = image.height;
};
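Used in the render loop, the cached object could look roughly like this sketch (canvas, ctx and image are assumed to exist as in the fiddle; renderFrame is an illustrative name):
var cache = new RENDER();

function renderFrame() {
    // read sizes from the small cache object instead of the canvas/image elements
    ctx.clearRect(0, 0, cache.width, cache.height);
    ctx.drawImage(image, 0, 0, cache.imageWidth, cache.imageHeight);
    requestAnimationFrame(renderFrame);
}

requestAnimationFrame(renderFrame);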
In conclusion
I am certain, based on the results and previous experience, that PIXI can run the code faster because it uses small custom objects rather than reading properties directly from large objects (elements) such as canvas and image.
The FF engine does not yet seem to be as "smart" as the V8 engine when it comes to traversing object trees and branches, so caching variables does have an impact in FF, which shows up when the demand is high (such as when drawing 16,500 bunnies per "frame").
One difference I noticed between your version and Pixi's is this:
You render the image at certain coordinates by passing x/y straight to the drawImage function:
drawImage(img, x, y, ...);
..whereas Pixi translates the entire canvas context and then draws the image at 0/0 (of the already shifted context):
setTransform(1, 0, 0, 1, x, y);
drawImage(img, 0, 0, ...);
They also pass more arguments to drawImage; arguments that control "destination rectangle" — dx, dy, dw, dh.
I suspected this is where the speed difference hides. However, changing your test to use the same "technique" doesn't really make things better.
But there's something else...
I clocked the bunnies at 5000, disabled WebGL, and Pixi actually performs worse than the custom fiddle version.
I get ~27 FPS on Pixi, and ~32-35 FPS on the fiddle.
This is all on Chrome 33.0.1712.4 dev, Mac OS X.
I'd suspect that this is some canvas compositing issue. Canvas is transparent by default, so the page background needs to be combined with the canvas contents...
I found this in their source...
// update the background color
if (this.view.style.backgroundColor != stage.backgroundColorString &&
!this.transparent) {
this.view.style.backgroundColor = stage.backgroundColorString;
}
Maybe they set the canvas to be opaque for this demo? (The fiddle doesn't really work for me; most of the bunnies seem to jump out with an extremely large dt most of the time.)
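If compositing is the culprit, one quick way to rule it out (in a current browser) would be to request an opaque 2D context and paint the background yourself; whether the bunnymark or Pixi does this is only my assumption, so treat this as a sketch:
// an opaque backing store means the page background never has to show through the canvas
var ctx = canvas.getContext("2d", { alpha: false });

// you are now responsible for clearing to a solid color each frame
ctx.fillStyle = "#ffffff";
ctx.fillRect(0, 0, canvas.width, canvas.height);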
I don't think it's an object property access timing / compilability thing: The point is valid, but I don't think it can explain that much of a difference.

Performance Warning Processing.js

When I write Processing.js in the JavaScript flavor, I get a performance warning that I didn't get when I used Processing.js to parse Processing code. I've created a fairly simple sketch with 3D support to get into it, and the console is flooded with this warning:
PERFORMANCE WARNING: Attribute 0 is disabled. This has signficant performance penalty
What does that mean? And even more importantly: how do I fix it?
Here's the sketch. (watch/edit on codepen.io)
var can = document.createElement("canvas");

var sketch = function (p) {
    p.setup = function () {
        p.size(800, 600, p.OPENGL);
        p.fill(170);
    };

    p.draw = function () {
        p.pushMatrix();
        p.translate(p.width / 2, p.height / 2);
        p.box(50);
        p.popMatrix();
    };
};

document.body.appendChild(can);
new Processing(can, sketch);
This is an issue in Processing.js.
For a detailed explanation: OpenGL and OpenGL ES have attributes. Every attribute can either fetch values from a buffer or provide a constant value. Except that in OpenGL, attribute 0 is special: it cannot provide a constant value, it MUST get its values from a buffer. WebGL, though, is based on OpenGL ES 2.0, which doesn't have this limitation.
So, when WebGL is running on top of OpenGL and the user does not use attribute 0 (it's set to use a constant value), WebGL has to make a temporary buffer, fill it with the constant value, and give it to OpenGL. This is slow. Hence the warning.
The issue in Processing.js is that it has a single shader that handles multiple use cases. It has attributes for normals, positions, colors, and texture coordinates. Depending on what you ask Processing to draw, it might not use all of these attributes. For example, it commonly might not use normals. Normals are only needed in Processing to support lights, so if you have no lights there are no normals (I'm guessing). In that case Processing turns normals off. Unfortunately, if the normals happen to be on attribute 0, then in order to render, WebGL has to make a temp buffer, fill it with a constant value, and only then render.
The way around this is to always use attribute 0. In the case of Processing, position data is always used, so before linking their shaders they should call bindAttribLocation:
// "aVertex" is the name of the attribute used for position data in
// Processing.js
gl.bindAttribLocation(program, 0, "aVertex");
This will make the attribute 'aVertex' always use attribute 0, and since every use case uses 'aVertex', they'll never get that warning.
Ideally you should always bind your locations. That way you don't have to query them after linking.
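As a minimal sketch of the order that matters here (program, vertexShader and fragmentShader are assumed to already exist; the point is that bindAttribLocation only takes effect on the next link):
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);

// bind before linking so "aVertex" is guaranteed to land on attribute 0
gl.bindAttribLocation(program, 0, "aVertex");
gl.linkProgram(program);

// no need to query afterwards: gl.getAttribLocation(program, "aVertex") === 0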

GLGE API setRot/setRotX doesn't work

I'm trying to make a little scene for viewing 3D models.
I modified the GLGE Collada example to add a .dae model from code.
http://goleztrol.nl/SO/GLGE/01/
What I've got
So far it works. The camera is rotated using an animation.
Using the buttons 'Add' and 'Remove', the model is added to and removed from the scene with the following code. (Don't mind 'duck'; it was a duck in the original example.)
var duck = null;

function addDuck()
{
    if (duck) return;
    duck = new GLGE.Collada();
    doc.getElement("mainscene").addCollada(duck);
    duck.setId("duck");
    duck.setDocument("amyrose.dae");
    duck.setLocY(-15);
    duck.setRotX(1);
    duck.setScale(2);
}

function removeDuck()
{
    if (!duck) return;
    doc.getElement("mainscene").removeChild(duck);
    duck = null;
}
Problem
Now the model is lying down, while it should stand up. The various methods of the element seem to work: the location is set and the scale is set, but the call to setRotX seems to be ignored. I tried various other methods from the API, but setRotY, setRot, setQuatX and setDRotX all seem to fail. I don't get any errors (well, not about this method). I tried a value of 1.57 (which should be about 90 degrees), but other values as well, ranging from 1 to 180.
I can't figure out what I'm doing wrong. Of course I could rotate the model itself in Blender, but I'd like to do it using the GLGE API.
Update
When I load the demo model, seymourplane_triangulate.dae, the rotation works. Apparently my model differs in that it cannot be rotated; I just don't understand why. I figured it may be because the model is built of various separate meshes, but then I don't understand why scaling and moving do work.
Does anyone know what's wrong with this model, and what I could do to fix it (maybe using Blender)?
Setting an initial rotation in the XML file that contains the scene does work. Setting rotation on another element (like the whole scene) works as well.
You need to rotate it after it has been loaded. You can do this in the callback to setDocument:
duck.setDocument("amyrose.dae", null, function () {
    duck.setLocY(-15);
    duck.setScale(2);
    duck.setRotX(0);
    duck.setRotY(0);
    duck.setRotZ(3);
});
