I want to read back the data stored in the GL.bufferData array in JavaScript.
Here is my code:
var TRIANGLE_VERTEX = geometryNode["triangle_buffer"];
GL.bindBuffer(GL.ARRAY_BUFFER, TRIANGLE_VERTEX);
GL.bufferData(GL.ARRAY_BUFFER,new Float32Array(vertices),GL.STATIC_DRAW);
Is it possible in WebGL to read the buffer data back from the GPU?
If so, please explain with a sample code.
Also, how can I find out the GPU's memory size (used and free) in WebGL at run time, and how can I debug shader code and data on the GPU in WebGL?
It is not directly possible to read the data back in WebGL1 (see below for WebGL2). This is a limitation of OpenGL ES 2.0, on which WebGL is based.
There are some workarounds:
You could render that data to a texture, then use readPixels to read the data.
You'd have to encode the data into bytes in your shader, because readPixels in WebGL1 can only read bytes.
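A minimal sketch of the readback half of that workaround, assuming you have already rendered your encoded data into a framebuffer-attached texture of size dataWidth x dataHeight (both names are just placeholders):
var pixels = new Uint8Array(dataWidth * dataHeight * 4); // RGBA bytes
gl.readPixels(0, 0, dataWidth, dataHeight, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// reinterpret the bytes as floats, assuming the shader wrote the raw
// IEEE-754 float bytes into the RGBA channels
var floats = new Float32Array(pixels.buffer);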
You can wrap your WebGL functions to store the data yourself, something like this:
var buffers = {};
var nextId = 1;
var targets = {};
function copyBuffer(data) {
  // make a Uint8 view of the data in case it's not already one
  var view = new Uint8Array(data.buffer || data);
  // now copy it
  return new Uint8Array(view);
}
gl.bindBuffer = function(oldBindBufferFn) {
  return function(target, buffer) {
    // remember which buffer is bound to which target
    targets[target] = buffer;
    oldBindBufferFn(target, buffer);
  };
}(gl.bindBuffer.bind(gl));
gl.bufferData = function(oldBufferDataFn) {
  return function(target, data, hint) {
    var buffer = targets[target];
    if (!buffer.id) {
      buffer.id = nextId++;
    }
    buffers[buffer.id] = copyBuffer(data);
    oldBufferDataFn(target, data, hint);
  };
}(gl.bufferData.bind(gl));
Now you can get the data with:
data = buffers[someBuffer.id];
This is probably what the WebGL Inspector does.
Note that there are a few issues with the code above. One, it doesn't check for errors. Checking for errors would make it way slower, but not checking for errors will give you incorrect results if your code generates errors. A simple example:
gl.bufferData(gl.ARRAY_BUFFER, someData, 123456);
Assuming someBuffer is bound to gl.ARRAY_BUFFER, this would generate an INVALID_ENUM error (123456 is not a valid usage hint) and would not update the data in someBuffer, but our code isn't checking for errors, so it would still have put someData into buffers. If you then read that data back, it wouldn't match what's actually in WebGL.
Note that the code above is pseudocode; for example, I didn't supply a wrapper for gl.bufferSubData, though a sketch of one follows.
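A minimal sketch of what such a gl.bufferSubData wrapper could look like, under the same assumptions as the code above (no error checking, shadow copies stored as Uint8Array as in copyBuffer):
gl.bufferSubData = function(oldBufferSubDataFn) {
  return function(target, offset, data) {
    var buffer = targets[target];
    // copy the updated bytes into our shadow copy at the same byte offset
    var view = new Uint8Array(data.buffer || data);
    buffers[buffer.id].set(view, offset);
    oldBufferSubDataFn(target, offset, data);
  };
}(gl.bufferSubData.bind(gl));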
WebGL2
In WebGL2 there is a function, gl.getBufferSubData, that will allow you to read the contents of a buffer. Note that WebGL2, even though it's based on OpenGL ES 3.0, does not support gl.mapBuffer, because there is no performant and safe way to expose that function.
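For example, reading back the vertex data uploaded in the question could look like this in WebGL2 (assuming vertices is the same array that was uploaded):
var gl = canvas.getContext("webgl2");
gl.bindBuffer(gl.ARRAY_BUFFER, TRIANGLE_VERTEX);
// allocate a typed array large enough to receive the buffer's contents
var readback = new Float32Array(vertices.length);
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, readback);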
Related
I am working on a web 2D world generation and simulation. I generate 2D terrain (different cell shapes and colors) using Perlin noise and some random functions in different web workers. Those workers feed a cache which contains the different chunks and their data. When a chunk receives data from its worker, it automatically updates its buffers:
(chunk) => {
  this.webgl.bindBuffer(this.webgl.ARRAY_BUFFER, chunk.vertexBuffer);
  this.webgl.bufferData(this.webgl.ARRAY_BUFFER, chunk.vertexes, this.webgl.STATIC_DRAW);
  this.webgl.bindBuffer(this.webgl.ARRAY_BUFFER, chunk.colorBuffer);
  this.webgl.bufferData(this.webgl.ARRAY_BUFFER, chunk.colors, this.webgl.STATIC_DRAW);
}
I also have a kind of game loop which calls itself on every requestAnimationFrame and draws the chunks:
cache.forEachChunk(center.current, scale.current, (chunk) => {
  gl.bindBuffer(gl.ARRAY_BUFFER, chunk.colorBuffer);
  const fillColor = gl.getAttribLocation(shaderProgram, "fillColor");
  gl.enableVertexAttribArray(fillColor);
  gl.vertexAttribPointer(fillColor, 3, gl.FLOAT, false, 0, 0);
  gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
  const position = gl.getAttribLocation(shaderProgram, "position");
  gl.enableVertexAttribArray(position);
  gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);
  if (chunk.vertexes)
    gl.drawArrays(gl.TRIANGLES, 0, chunk.vertexes.length / 2);
})
This works very well... except that there is a little freeze every time a chunk receives data from its worker: calling bufferData is pretty slow, but above all, it blocks my application. Is there any way to do this in a web worker, or any other strategy to send data to my graphics card without blocking my whole application? Blocking doesn't make sense: the graphics card should still be able to draw the other chunks without any problem.
bufferData creates a completely new data store; think of it as buffer allocation and initialization. If you just want to update an existing buffer, use bufferSubData (see also Buffer objects). bufferSubData simply writes the data into an existing buffer without re-creating the data store, so it's much faster. Of course, bufferSubData cannot change the size of the data store, so you have to "allocate" the buffer once with bufferData. However, there is no need to pass data to the buffer when you initialize it: the data argument of bufferData can be null, or you can pass just a size in bytes.
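A minimal sketch of that pattern (MAX_CHUNK_BYTES is a placeholder for whatever upper bound your chunks have):
// one-time allocation: no data yet, just a size
gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, MAX_CHUNK_BYTES, gl.STATIC_DRAW);
// later, whenever the worker delivers new vertices:
gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, chunk.vertexes);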
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train and convert a model to different formats, including tfjs (model_web).
That website also provides boilerplates for running the model within a browser (React app)... just like you do - probably it is the same code; I didn't spend enough time on it.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results, considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. So, no big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that have been split into frames, at a decent speed - although I still get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture with the same model_web folder gives the same prediction, but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model? I don't think it can be a problem of input, because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... that's the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs');
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');
const decodeJPGInkjet = (file) => {
  return new Promise((rs, rj) => {
    // fs.promises.readFile returns a promise (callback-style fs.readFile does not)
    fs.promises.readFile(file).then((buffer) => {
      inkjet.decode(buffer, (err, decoded) => {
        if (err) {
          rj(err);
        } else {
          rs(decoded);
        }
      });
    });
  });
};
const decodeJPGCanvas = (file) => {
  return loadImage(file).then((image) => {
    const canvas = createCanvas(image.width, image.height);
    const ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0, image.width, image.height);
    const data = ctx.getImageData(0, 0, image.width, image.height);
    return {data: new Uint8Array(data.data), width: data.width, height: data.height};
  });
};
And that's the code that uses the loaded model to give predictions - same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as is; I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read
const runObjectDetectionPrediction = async (graph, labels, input) => {
  const batched = tf.tidy(() => {
    const img = tf.browser.fromPixels(input);
    // Reshape to a single-element batch so we can pass it to executeAsync.
    return img.expandDims(0);
  });
  const height = batched.shape[1];
  const width = batched.shape[2];
  const result = await graph.executeAsync(batched);
  const scores = result[0].dataSync();
  const boxes = result[1].dataSync();
  // clean the webgl tensors
  batched.dispose();
  tf.dispose(result);
  const [maxScores, classes] = calculateMaxScores(
    scores,
    result[0].shape[1],
    result[0].shape[2]
  );
  const prevBackend = tf.getBackend();
  // run post process in cpu
  tf.setBackend("cpu");
  const indexTensor = tf.tidy(() => {
    const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
    return tf.image.nonMaxSuppression(
      boxes2,
      maxScores,
      20,  // maxNumBoxes
      0.5, // iou_threshold
      0.5  // score_threshold
    );
  });
  const indexes = indexTensor.dataSync();
  indexTensor.dispose();
  // restore previous backend
  tf.setBackend(prevBackend);
  return buildDetectedObjects(
    width,
    height,
    boxes,
    maxScores,
    indexes,
    classes,
    labels
  );
};
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the predictions will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ between the two environments, resulting in different tensors being used for the prediction and thus causing the output to differ.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics to create a browser-like canvas environment that can be used by Node.js. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is also possible to create a tensor directly from a Node.js buffer, as sketched below.
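For instance, tfjs-node can decode an image buffer into a tensor without any canvas at all; a minimal sketch (the file path is just an example):
const fs = require('fs');
const tf = require('@tensorflow/tfjs-node');
// read the raw JPG bytes and let tfjs-node decode them into a tensor
const imageBuffer = fs.readFileSync('path/to/frame.jpg');
const imageTensor = tf.node.decodeImage(imageBuffer); // shape [height, width, channels]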
I am looking to implement my Keras model into my website with the Keras.js library. One problem with this library is that when inputting data in JavaScript, only a Float32Array() is allowed as an input. This type of array is 1D, while in my Keras model the input is 3D. I have asked on the Keras.js issues page and found some potential solutions, such as adding an embedding layer, but that requires a specific input shape, whereas I would like any 3D input to work, as it did when I trained the model.
The model structure is simple: there is an input layer which takes an array of dimension mxnx3 (it is an image of unknown size with r, g, and b channels) and a Conv2D layer, which then outputs an mxnx1 array. I know the model works because it can give a good output based on an input, so the only problem I have is with the transition to Keras.js. Here is the JS code that I have at the moment.
function predictImageWithCNN(data) { // 'data' is an mxnx3 array
  var final = [];
  // instantiating model from json and buf files
  var model = new KerasJS.Model({
    filepaths: {
      model: 'dist/model.json',
      weights: 'dist/model_weights.buf',
      metadata: 'dist/model_metadata.json'
    },
    gpu: true // MAY NEED TO CHANGE (NOT SURE REALLY)
  });
  // Ready the model.
  model.ready()
    .then(function() {
      // This matches our input data with the input key (b/c Sequential Model)
      var inputData = {
        'input_1': new Float32Array(data)
      };
      // make predictions based on inputData
      return model.predict(inputData);
    })
    .then(function(outputData) {
      // Here we take the outputData and parse it to get a result.
      var out = outputData['output']; // Gets output data
      console.log(out);
      // TODO: Put in code to assign outputData to 'final' so we can then convert it
      // This should not be too hard to do.
    })
    .catch(function(err) {
      console.log(err);
      // handle error
    });
  return final; // should return an mxnx1 array of vals 0-1.
}
If anyone has any suggestions for how to resolve this, that would be very much appreciated. Thanks! :)
I had the same problem with an LSTM. The way I got around this was by training on a flattened version of the data, but using a Reshape layer as the first layer to get it into the shape I needed for my LSTM, e.g.
model = Sequential()
model.add(Reshape((40,59),input_shape=(1,2360)))
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
Then in Keras.js I can feed in the flattened version from a Float32Array.
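For the Conv2D model in the question, the JavaScript side would then flatten the mxnx3 array before handing it to Keras.js; a minimal sketch (flatten3d is a hypothetical helper, not part of Keras.js):
// flatten a 3D array [m][n][3] into a single Float32Array of length m*n*3
function flatten3d(data) {
  const m = data.length, n = data[0].length, c = data[0][0].length;
  const flat = new Float32Array(m * n * c);
  let i = 0;
  for (const row of data) {
    for (const pixel of row) {
      for (const channel of pixel) {
        flat[i++] = channel;
      }
    }
  }
  return flat;
}
// then: var inputData = { 'input_1': flatten3d(data) };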
I wonder: how can I obtain a WebGL program instance (WebGLProgram) from any desired WebGL context?
Fetching the WebGL context is NOT a problem. You search the DOM of the current page for a canvas element using document.getElementsByTagName(), or document.getElementById() if you know the exact canvas id:
let canvas = document.getElementById( "canvasId" );
let context = canvas.getContext( "webgl" );
Here we fetch the current context, I suppose, but if I want to get some shader parameters or a certain value from an already running vertex/fragment shader, I need the WebGL program which is associated with the current WebGL rendering context.
But I can't find any method in the WebGL API like context.getAttachedProgram() or context.getActiveProgram().
So what is the way to get the active WebGL program which is used for the rendering process?
Maybe there is some special WebGL parameter?
There is no way to get all the programs or any other resources from a WebGL context. If the context already exists, the best you can do is look at the currently bound resources with things like gl.getParameter(gl.CURRENT_PROGRAM) etc.
What you can do instead is wrap the WebGL context:
var allPrograms = [];
someContext.createProgram = (function(oldFunc) {
  return function() {
    // call the real createProgram
    var prg = oldFunc.apply(this, arguments);
    // if a program was created save it
    if (prg) {
      allPrograms.push(prg);
    }
    return prg;
  };
}(someContext.createProgram));
Of course you'd need to wrap gl.deleteProgram as well to remove things from the array of all programs.
someContext.deleteProgram = (function(oldFunc) {
  return function(prg) {
    // call the real deleteProgram
    oldFunc.apply(this, arguments);
    // remove the program from allPrograms
    var ndx = allPrograms.indexOf(prg);
    if (ndx >= 0) {
      allPrograms.splice(ndx, 1);
    }
  };
}(someContext.deleteProgram));
These are the techniques used by things like the WebGL Inspector and the WebGL Shader Editor Extension.
If you want to wrap all contexts you can use a similar technique to wrap getContext.
HTMLCanvasElement.prototype.getContext = (function(oldFunc) {
  return function(type) {
    var ctx = oldFunc.apply(this, arguments);
    if (ctx && (type === "webgl" || type === "experimental-webgl")) {
      ctx = wrapTheContext(ctx);
    }
    return ctx;
  };
}(HTMLCanvasElement.prototype.getContext));
gl.getParameter(gl.CURRENT_PROGRAM). Check out https://www.khronos.org/files/webgl/webgl-reference-card-1_0.pdf, page 2, on the right.
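Once you have the current program, you can, for example, inspect its attached shaders and their sources with standard WebGL calls; a small sketch:
const prg = gl.getParameter(gl.CURRENT_PROGRAM);
if (prg) {
  // list the shaders attached to the active program and dump their GLSL
  gl.getAttachedShaders(prg).forEach((shader) => {
    console.log(gl.getShaderSource(shader));
  });
}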
I'm using Emscripten to try to get an open source game to run in a browser. It compiles fine, loads all of its files and everything, but when I run it I get the following exception:
exception thrown: TypeError: surfData.colors32 is undefined,_SDL_FillRect#file:///home/misson20000/dev/js/game.js:9702:9
__ZN9Surface5ClearEhhh#file:///home/misson20000/dev/js/game.js:112026:3
...
_main#file:///home/misson20000/dev/js/game.js:10525:11
asm._main#file:///home/misson20000/dev/js/game.js:170793:10
callMain#file:///home/misson20000/dev/js/game.js:173065:15
doRun#file:///home/misson20000/dev/js/game.js:173122:7
run/<#file:///home/misson20000/dev/js/game.js:173134:7
The code that is calling SDL_FillRect (a simple clear function) follows:
SDL_FillRect(fSurface, NULL, MapColor(r, g, b));
MapColor is defined as
return SDL_MapRGB(fSurface->format, r, g, b);
Digging around in the source code for a bit reveals that the surface in question is a screen surface.
How can I make surfData.colors32 not be undefined?
colors32 is used when you create an SDL surface with the SDL_HWPALETTE flag. To correctly use a surface of this type, you should call SDL_SetColors before SDL_FillRect. Take a look in src/library_sdl.js:
SDL_SetColors: function(surf, colors, firstColor, nColors) {
  var surfData = SDL.surfaces[surf];
  // we should create colors array
  // only once cause client code
  // often wants to change portion
  // of palette not all palette.
  if (!surfData.colors) {
    var buffer = new ArrayBuffer(256 * 4); // RGBA, A is unused, but faster this way
    surfData.colors = new Uint8Array(buffer);
    surfData.colors32 = new Uint32Array(buffer);
  }
  //...
}