I am working on 2D world generation and simulation for the web. I generate 2D terrain (different cell shapes and colors) using Perlin noise and some random functions in different web workers. Those workers feed a cache which contains the different chunks and their data. When a chunk receives data from its worker, it automatically updates its buffers:
(chunk) => {
    this.webgl.bindBuffer(this.webgl.ARRAY_BUFFER, chunk.vertexBuffer);
    this.webgl.bufferData(this.webgl.ARRAY_BUFFER, chunk.vertexes, this.webgl.STATIC_DRAW);
    this.webgl.bindBuffer(this.webgl.ARRAY_BUFFER, chunk.colorBuffer);
    this.webgl.bufferData(this.webgl.ARRAY_BUFFER, chunk.colors, this.webgl.STATIC_DRAW);
}
I also have a kind of game loop which schedules itself via requestAnimationFrame and draws the chunks:
cache.forEachChunk(center.current, scale.current, (chunk) => {
    gl.bindBuffer(gl.ARRAY_BUFFER, chunk.colorBuffer);
    const fillColor = gl.getAttribLocation(shaderProgram, "fillColor");
    gl.enableVertexAttribArray(fillColor);
    gl.vertexAttribPointer(fillColor, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
    const position = gl.getAttribLocation(shaderProgram, "position");
    gl.enableVertexAttribArray(position);
    gl.vertexAttribPointer(position, 2, gl.FLOAT, false, 0, 0);
    if (chunk.vertexes)
        gl.drawArrays(gl.TRIANGLES, 0, chunk.vertexes.length / 2);
})
This works very well... except that there is a little freeze every time a chunk receives data from its worker: calling bufferData is pretty slow, but above all it blocks my application. Is there any way to do this in a web worker, or any other strategy that lets me send data to my graphics card without blocking my whole application? Blocking makes no sense: the graphics card should still be able to draw the other chunks without worry.
bufferData creates a completely new data store. Think of it as buffer allocation and initialization. If you just want to update an existing buffer, use bufferSubData (see also Buffer objects). bufferSubData simply writes the data into an existing buffer without re-creating the data store, so it's much faster. Of course, bufferSubData cannot change the size of the data memory and you have to "allocate" the buffer once with bufferData. However, there is no need to pass data to the buffer when you initialize it. This means that the data argument of bufferData can be null.
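A minimal sketch of that allocate-once, update-in-place pattern (MAX_VERTEX_BYTES is an illustrative size, not something from the original code):

// One-time setup: allocate the data store without uploading anything,
// by passing a size instead of data.
gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, MAX_VERTEX_BYTES, gl.DYNAMIC_DRAW);

// Every time the worker delivers new data: overwrite the existing
// store starting at byte offset 0, without reallocating it.
gl.bindBuffer(gl.ARRAY_BUFFER, chunk.vertexBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, chunk.vertexes);

Since these buffers are rewritten as chunks stream in, gl.DYNAMIC_DRAW is probably a more appropriate usage hint than the gl.STATIC_DRAW in the original code.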
Related
I want to detect when rendering is complete. I tried the following approach:
scene.add( mesh );
render();

mesh.onBeforeRender = function(renderer, scene){
    ...
}
mesh.onAfterRender = function(renderer, scene){
    ...
}
However, onBeforeRender/onAfterRender kept firing over and over for my object (mesh) in my case (maybe because the mesh has several materials, and I use requestAnimationFrame), and I could not detect when rendering of that one object was complete.
Is there any way to detect that rendering has finished?
Similar questions are:
THREE.js static scene, detect when webgl render is complete
Three.js render complete
As I said in my comment, drawing is usually complete after the call to WebGLRenderer.render.
Things that can make it seem like that's not the case include:
- Not all shapes are loaded (especially true if you're loading files)
- Your scene is very large and complex, requiring a lot of GPU processing
Unfortunately, there is nothing built-in that registers that the GPU has finished creating the color buffer, or even that the canvas was updated.
The best you could do--and it would be computationally expensive--would be to store the last frame's data and compare it against the current frame.
const buffer = new Uint8Array( canvasWidth * canvasHeight * 4 )
renderer.context.readPixels( 0, 0, canvasWidth, canvasHeight, renderer.context.RGBA, renderer.context.UNSIGNED_BYTE, buffer )
Now, you might miss collecting the pixel data, too. If the renderer clears the color buffer before you get to call readPixels, then your buffer may end up filled with zeroes.
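A hedged sketch of that comparison, with canvasWidth/canvasHeight as above (frameSettled is an illustrative helper name, and the pixels are read immediately after render() so the color buffer has not been cleared yet):

let previous = null;

function frameSettled() {
    renderer.render(scene, camera);

    // Grab the freshly rendered frame before anything can clear it.
    const current = new Uint8Array(canvasWidth * canvasHeight * 4);
    renderer.context.readPixels(0, 0, canvasWidth, canvasHeight,
        renderer.context.RGBA, renderer.context.UNSIGNED_BYTE, current);

    // Identical pixels two frames in a row suggest rendering has settled.
    const settled = previous !== null && current.every((v, i) => v === previous[i]);
    previous = current;
    return settled;
}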
I am trying to make a PDF out of a reasonable amount of text and graphs in my HTML using html2pdf. So far so good: the PDF gets made and it looks fine. It is about 20 pages.
However, multiple graphs are missing. Some relevant information:
- Most of the missing graphs are at the end of the document. However, the final graph is rendered, so it is not a simple cut-off at the end.
- The text accompanying the graphs is there, but the graph itself is not.
- The missing graphs are all of the same type, but other graphs of this type are rendered and look fine, so it does not seem to be a problem with the type.
- If I reduce the scale in the html2canvas configuration to about 0.8, every graph gets rendered (but of course quality is reduced). I'd like the scale to be 2.
The fact that the scale influences whether they are rendered gives me the idea that something like timing/timeouts is the problem here. A larger scale obviously means a longer rendering time, but html2pdf does not seem to wait for it to finish. Or something like that.
Below is the majority of the code that makes the PDF.
The onclone is necessary for the graphs to be rendered correctly. If it is removed, the problem described above still occurs (but the graphs that are rendered are ugly).
const options = {
    filename: 'test.pdf',
    margin: [15, 0, 15, 0],
    image: { type: 'jpeg', quality: 1 },
    html2canvas: {
        scale: 2,
        scrollY: 0,
        onclone: (element) => {
            const svgElements = element.body.querySelectorAll('svg');
            Array.from(svgElements).forEach((item: any) => {
                item.setAttribute('width', item.getBoundingClientRect().width.toString());
                item.setAttribute('height', item.getBoundingClientRect().height.toString());
                item.style.width = null;
                item.style.height = null;
            });
        }
    },
    jsPDF: { orientation: 'portrait', format: 'a4' }
};
setTimeout(() => {
    const pdfElement = document.getElementById('contentToConvert');
    html2pdf().from(pdfElement).set(options).save()
        .catch((err) => this.errorHandlerService.handleError(err));
}, 100);
It sounds like you may be exceeding the maximum canvas size of your browser. This varies by browser (and browser version). Try the demo from here to check your browser's limit. If you can find 2 browsers with different limits (on my desktop, Safari and Chrome have the same, but the max area in Firefox is a bit lower - iPhone area much lower in Safari), try pushing your scale down on the one with the larger limit until it succeeds, and then see if that fails on the one with the lower limit.

There are other limits in your browser (e.g. max heap size) which may also come into play. If this is the case, I don't have a good solution for you - it's usually impractical to get clients to reconfigure their browsers (and most of these limits are hard anyway). For obvious reasons, browsers don't allow a website to make arbitrary changes to memory limits.

If you are using Node.js, you may have more success in dealing with memory limits. Either way (Node or otherwise), it's sometimes better to send things back to the server when you are pushing the limits of the client.
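If you want to probe the limit programmatically, here is a simplified sketch in the spirit of that demo (the real demo is more thorough; sizes here are illustrative):

// Rough check of whether the browser can actually rasterize a w x h
// canvas: beyond its limit, drawing silently fails or reads back empty.
function canvasWorks(w, h) {
    try {
        const c = document.createElement('canvas');
        c.width = w;
        c.height = h;
        const ctx = c.getContext('2d');
        ctx.fillRect(w - 1, h - 1, 1, 1);
        // The drawn pixel reads back opaque only if the canvas is real.
        return ctx.getImageData(w - 1, h - 1, 1, 1).data[3] !== 0;
    } catch (e) {
        return false;
    }
}

console.log(canvasWorks(16384, 16384));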
Pretty new to ML and TensorFlow!
I made an object detection model with http://cloud.annotations.ai, which lets you train a model and convert it to different formats, including tfjs (model_web).
That website also provides boilerplate for running the model within a browser (a React app)... just like you do - probably it is the same code, I didn't spend enough time to check.
So I have this model running inside a browser, giving predictions about objects in a photo with pretty good results considering the number of examples I gave and the prediction score (0.89). The given bounding box is good too.
But, unfortunately, I didn't have "just one video" to analyze frame by frame inside a browser; I've got plenty of them. So I decided to switch to Node.js, porting the code as is.
Guess what? TF.js relies on DOM and browser components, and almost no examples that work with Node exist. No big deal, I just spent a morning figuring out all the missing parts.
Finally I'm able to run my model over videos that are split into frames, at a decent speed - although I get the "Hello there, use tfjs-node to gain speed" banner even though I'm already using tfjs-node - but the results seem odd.
Comparing the same picture against the same model_web folder gives the same prediction, but with a lower score (0.80 instead of 0.89) and a different bounding box, with the object not centered at all.
(TL;DR)
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model? I don't think it can be a problem with the input because - after a long search and fight - I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node). Has anyone made comparisons?
So... here is the code I used, for your reference:
model_web is being loaded with tf.loadGraphModel("file://path/to/model_web/model.json");
Two different ways to convert a JPG and make it work with tf.browser.fromPixels():
const fs = require('fs').promises;
const inkjet = require('inkjet');
const {createCanvas, loadImage} = require('canvas');

const decodeJPGInkjet = (file) => {
    return new Promise((rs, rj) => {
        fs.readFile(file).then((buffer) => {
            inkjet.decode(buffer, (err, decoded) => {
                if (err) {
                    rj(err);
                } else {
                    rs(decoded);
                }
            });
        });
    });
};
const decodeJPGCanvas = (file) => {
    return loadImage(file).then((image) => {
        const canvas = createCanvas(image.width, image.height);
        const ctx = canvas.getContext('2d');
        ctx.drawImage(image, 0, 0, image.width, image.height);
        const data = ctx.getImageData(0, 0, image.width, image.height);
        return {data: new Uint8Array(data.data), width: data.width, height: data.height};
    });
};
And here is the code that uses the loaded model to give predictions - the same code for Node and browser, found at https://github.com/cloud-annotations/javascript-sdk/blob/master/src/index.js - it doesn't work on Node as is; I changed require("@tensorflow/tfjs") to require("@tensorflow/tfjs-node") and replaced fetch with fs.read.
const runObjectDetectionPrediction = async (graph, labels, input) => {
    const batched = tf.tidy(() => {
        const img = tf.browser.fromPixels(input);
        // Reshape to a single-element batch so we can pass it to executeAsync.
        return img.expandDims(0);
    });
    const height = batched.shape[1];
    const width = batched.shape[2];
    const result = await graph.executeAsync(batched);
    const scores = result[0].dataSync();
    const boxes = result[1].dataSync();
    // clean the webgl tensors
    batched.dispose();
    tf.dispose(result);
    const [maxScores, classes] = calculateMaxScores(
        scores,
        result[0].shape[1],
        result[0].shape[2]
    );
    const prevBackend = tf.getBackend();
    // run post process in cpu
    tf.setBackend("cpu");
    const indexTensor = tf.tidy(() => {
        const boxes2 = tf.tensor2d(boxes, [result[1].shape[1], result[1].shape[3]]);
        return tf.image.nonMaxSuppression(
            boxes2,
            maxScores,
            20, // maxNumBoxes
            0.5, // iou_threshold
            0.5 // score_threshold
        );
    });
    const indexes = indexTensor.dataSync();
    indexTensor.dispose();
    // restore previous backend
    tf.setBackend(prevBackend);
    return buildDetectedObjects(
        width,
        height,
        boxes,
        maxScores,
        indexes,
        classes,
        labels
    );
};
Do the two libraries (tfjs and tfjs-node) have different implementations that make different use of the same model?
If the same model is deployed both in the browser and in Node.js, the prediction will be the same.
If the predicted values are different, it might be related to the tensor used for the prediction. The processing from image to tensor might differ, resulting in different tensors being used for the prediction and thus causing the output to differ.
I figured out two ways to give the image to tf.browser.fromPixels in Node (and I'm still wondering why I have to use a "browser" method inside tfjs-node)
The canvas package uses the system graphics to create a browser-like canvas environment that Node.js can use. This makes it possible to use the tf.browser namespace, especially when dealing with image conversion. However, it is still possible to use a Node.js buffer directly to create a tensor.
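For instance, a minimal sketch that skips canvas entirely by decoding the JPG with tfjs-node's own tf.node.decodeImage (the file path is illustrative):

const tf = require('@tensorflow/tfjs-node');
const fs = require('fs');

// Decode the JPG straight from a Node buffer into an int32 tensor of
// shape [height, width, 3] - no canvas and no tf.browser involved.
const buffer = fs.readFileSync('path/to/image.jpg');
const input = tf.node.decodeImage(buffer, 3);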
I preload and create material textures before starting to render and animate objects.
But Three.js uploads a texture to the GPU only when its object appears in the camera view.
So when a new object comes on screen, the animation stutters because its texture is being sent to the GPU.
The question is: how can I send textures to the GPU while the texture is being created, to avoid doing it at runtime?
Loading images to the GPU takes a lot of time.
My guess is to walk your object tree, set each object's frustumCulled flag to false, call renderer.render(scene, ...) once, then put the flags back to true (or whatever they were).
function setAllCulled(obj, culled) {
    obj.frustumCulled = culled;
    obj.children.forEach(child => setAllCulled(child, culled));
}

setAllCulled(scene, false);
renderer.render(scene, camera);
setAllCulled(scene, true);
You can also call renderer.setTexture2D(texture, 0) to force a texture to be initialized.
Try renderer.initTexture(texture: THREE.Texture) for r0.149.0
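A small usage sketch of that, assuming textures is an array of already-loaded THREE.Texture objects and a recent three.js revision:

// Upload every preloaded texture to the GPU right away, instead of
// waiting for the first frame in which an object using it is visible.
textures.forEach((texture) => renderer.initTexture(texture));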
I want to read back the data stored in the GL.bufferData array in JavaScript.
Here is my code:
var TRIANGLE_VERTEX = geometryNode["triangle_buffer"];
GL.bindBuffer(GL.ARRAY_BUFFER, TRIANGLE_VERTEX);
GL.bufferData(GL.ARRAY_BUFFER, new Float32Array(vertices), GL.STATIC_DRAW);
Is it possible in WebGL to read back the buffer data from the GPU? If so, please explain with some sample code.
Also, how can I find out the GPU's memory size (filled and free) in WebGL at run time, and how can I debug shader code and data on the GPU in WebGL?
It is not directly possible to read the data back in WebGL1 (see below for WebGL2). This is a limitation of OpenGL ES 2.0, on which WebGL is based.
There are some workarounds:
- You could try to render that data to a texture, then use readPixels to read the data. You'd have to encode the data into bytes in your shader, because readPixels in WebGL can only read bytes.
- You can wrap your WebGL calls to store the data yourself, something like this:
var buffers = {};
var nextId = 1;
var targets = {};

function copyBuffer(data) {
    // make a Uint8 view of the data in case it's not already one
    var view = new Uint8Array(data.buffer ? data.buffer : data);
    // now copy it
    return new Uint8Array(view);
}

gl.bindBuffer = function(oldBindBufferFn) {
    return function(target, buffer) {
        // remember which buffer is bound to which bind point
        targets[target] = buffer;
        oldBindBufferFn(target, buffer);
    };
}(gl.bindBuffer.bind(gl));

gl.bufferData = function(oldBufferDataFn) {
    return function(target, data, hint) {
        var buffer = targets[target];
        if (!buffer.id) {
            buffer.id = nextId++;
        }
        buffers[buffer.id] = copyBuffer(data);
        oldBufferDataFn(target, data, hint);
    };
}(gl.bufferData.bind(gl));
Now you can get the data with
data = buffers[someBuffer.id];
This is probably what the WebGL Inspector does.
Note that there are a few issues with the code above. First, it doesn't check for errors. Checking for errors would make it much slower, but not checking for errors will give you incorrect results if your code generates them. A simple example:
gl.bufferData(someBuffer, someData, 123456);
This would generate an INVALID_ENUM error and not update the data in someBuffer, but our code isn't checking for errors, so it would have put someData into buffers, and if you read that data back it wouldn't match what's in WebGL.
Note that the code above is pseudo code. For example, I didn't supply a wrapper for gl.bufferSubData.
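For completeness, a hedged sketch of what that missing gl.bufferSubData wrapper might look like, under the same no-error-checking caveats as above:

gl.bufferSubData = function(oldBufferSubDataFn) {
    return function(target, offset, data) {
        var buffer = targets[target];
        // patch our CPU-side copy in place at the given byte offset
        buffers[buffer.id].set(new Uint8Array(data.buffer ? data.buffer : data), offset);
        oldBufferSubDataFn(target, offset, data);
    };
}(gl.bufferSubData.bind(gl));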
WebGL2
In WebGL2 there is a function, gl.getBufferSubData, that will allow you to read the contents of a buffer. Note that WebGL2, even though it's based on OpenGL ES 3.0, does not support gl.mapBuffer, because there is no performant and safe way to expose that function.
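A minimal sketch of reading back the question's vertex buffer with it, assuming gl is a WebGL2 context:

// Allocate a destination array matching the buffer contents, then copy
// the GPU-side bytes into it starting at byte offset 0.
const readback = new Float32Array(vertices.length);
gl.bindBuffer(gl.ARRAY_BUFFER, TRIANGLE_VERTEX);
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, readback);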