My code is like this:
function render() {
  renderer.render( scene, camera );
  renderer.clear();
}
I wonder why it makes my scene black. Is that because the color buffer is cleared before being actually rendered?
Written this way, it works well:
function render() {
  renderer.clear();
  renderer.render( scene, camera );
}
But how can I make sure the color buffer has been rendered before I call clear()?
I'm curious about the difference between clearing at the end and at the beginning.
The difference between clearing the renderer's framebuffer before and after rendering lies in the way WebGL content is presented to a web page. The thing is, WebGL is always at least double buffered (in the WebGL Insights book, the folks from Mozilla say that in Firefox WebGL canvases are actually triple-buffered). That means that within a requestAnimationFrame callback (the render function in your case) all WebGL calls affect only a so-called back buffer. The other buffer (called the front buffer) is unaffected. Then, when the callback ends, the browser swaps buffers: the back buffer becomes the front buffer, and the front one becomes the back buffer. The new front buffer is then used by the browser to draw the web page. The new back buffer is drawn to by WebGL the next time the rAF callback is called. It's important to note that, by default, the browser doesn't preserve the contents of the buffers when swapping them (the preserveDrawingBuffer context option changes that, though).
Going back to your question, the difference is this: when you render the scene first and then clear the buffer, you first get odd results, since the framebuffer still contains the result of a previously rendered frame (you won't see those results on screen, they'll just be in the buffer's memory), and then you make all of that irrelevant by clearing the buffer. After that the browser presents the cleared buffer to the page as it is: a black rectangle (or some other color, depending on the renderer's options). However, if you clear first and then render the scene, you get correct results: clearing removes the remnants of the previous frame, rendering then puts new content into the buffer, and the browser presents that to the page.
So, to sum up: we usually clear the framebuffer first to remove any traces of previous frames, so we start with a "clean slate", and then render new content into it.
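As a side note, three.js can handle the clear for you: WebGLRenderer.autoClear defaults to true, so render() already clears the buffers before drawing, and an explicit clear() is only needed when autoClear is turned off (e.g. for multi-pass rendering). A minimal sketch (the backgroundScene pass is just illustrative):

// autoClear is true by default, so render() clears before drawing:
function render() {
  renderer.render( scene, camera );
}

// With autoClear disabled (e.g. for multi-pass rendering), clear manually first:
renderer.autoClear = false;
function renderMultiPass() {
  renderer.clear();
  renderer.render( backgroundScene, camera ); // hypothetical extra pass
  renderer.render( scene, camera );
}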
Related
Background
I'm making a simple online drawing app so that I can practice my JS and canvas skills. It's going really well, with the ability to draw freely, and also the ability to draw straight lines.
How my drawing app works
It registers mouse and touch events, which handle all of the drawing and saving.
Here's some example code:
var tools={
  "pencil":{
    "started": false,
    ... (some data used by the pencil tool),
    "start":function(e){
      if(currentTool!=="pencil")return; // Make sure that the selected tool on the toolbar is the pencil
      // code for mousedown/touchstart
      tools.pencil.started=true;
    },
    "move":function(e){
      if(!tools.pencil.started)return;
      // code for mousemove/touchmove
    },
    "end":function(e){
      if(!tools.pencil.started)return;
      // code for mouseup/touchend
      tools.pencil.started=false;
    }
  },
  "other tools":{
    ...
  }
};
// And here would be a function which adds the mouse and touch events
Here is my pencil tool:
var tools={
  pencil:{
    started:false,
    start:function(e){
      if(currentTool!=="pencil")return; // Make sure the pencil tool is selected
      e.preventDefault();
      e=e.clientX?e:e.touches[0]; // Allow touch
      context.beginPath(); // Begin drawing
      context.moveTo(e.clientX-50,e.clientY); // Set the position of the pencil
      tools.pencil.started=true; // Mark the pencil tool as started
    },
    move:function(e){
      if(tools.pencil.started){ // Make sure the pencil is started
        e.preventDefault();
        e=e.clientX?e:e.touches[0];
        context.lineTo(e.clientX-50,e.clientY); // Draw a line
        context.stroke(); // Make the line visible
      }
    },
    end:function(e){
      if(tools.pencil.started){
        e.preventDefault();
        //tools.pencil.move(e); // Finish drawing the line
        tools.pencil.started=false; // Mark the pencil tool as not started
      }
    }
  }
};
Ignore the -50 parts (they're just to adjust for the sidebar). This works, but it doesn't save anything to localStorage.
Problem
TL;DR: I need to save everything on the canvas into some kind of storage (I'm currently using localStorage, but anything would work, though I'd prefer a client-side-only solution). I can't figure out how to store it efficiently, where 'efficiently' means both fast and accurate (accurate as in it stores the whole line). Straight lines are easy to store, but I haven't figured out hand-drawn strokes yet.
Explanation:
When the user resizes the window, the canvas resizes to the window size (I made this happen on purpose; it's not a bug). I was able to make it so that on resize it first saves the drawings onto a temporary canvas and then, after the main canvas resizes, redraws them back. But there's a problem. Here's an example to make it clear:
You open the draw app and fill the screen with drawings.
You open DevTools and some of the drawings get covered.
When you close DevTools, those drawings are gone.
The reason is that since the canvas got smaller, the drawings that went off of it were lost, and when the canvas came back to its original size they weren't visible (because they're gone). I decided to save everything to localStorage so that I can retain them (and also for some more features I may add). Straight lines work, since a line only needs to say "I am a line, I start at x,y and end at x,y", and that's it for a line of any size. But hand-drawn strokes can go anywhere, so they need to say "I am a pixel, I am at x,y" many times over, even for remotely complex images.
Attempted solution
Whenever the mouse moves, it saves to a variable which is then written to localStorage. The problems with this are that if you move fast the line isn't complete (it has holes in it), and localStorage ends up storing a lot of text.
Question
How can I efficiently store all of the user's hand-drawn (pencil tool) images (client-side operations preferred)?
My suggestion is to consider the target use of this app and work backwards from it.
If you're doing something like an MS Paint drawing app, storing the canvas as a bitmap may be better. To deal with cropping issues, I would suggest using a predefined canvas size or a setup stage (with a maximum size to prevent issues with saving to localStorage).
If you're worried about image size being restrictive due to memory cap, you can also utilize compression.
For example:
canvas.toDataURL('image/jpeg', 0.5);
This will compress the image and return a string that is localStorage-friendly.
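As a rough sketch of that approach (the 'drawing' key and function names here are just illustrative), a save/restore round trip could look like this:

// Save the whole canvas as a compressed JPEG data URL (assumes globals: canvas, context)
function saveCanvas() {
  localStorage.setItem('drawing', canvas.toDataURL('image/jpeg', 0.5));
}

// Restore it, e.g. after a resize
function restoreCanvas() {
  var data = localStorage.getItem('drawing');
  if (!data) return;
  var img = new Image();
  img.onload = function () { context.drawImage(img, 0, 0); };
  img.src = data;
}

Note that JPEG flattens transparency, so if your canvas has a transparent background you may prefer 'image/png' at the cost of a larger string.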
If, on the other hand, you are building something more like an excalidraw-style app, where you want to maintain the ability to edit any drawn object, then you'll need to store each object individually. Whether you use canvas or SVG doesn't really matter imho, but SVG can make it a bit easier in the sense that the definition of how to store each element is already established, so you don't need to re-invent the wheel, so to speak, just implement it. Using SVG doesn't necessarily preclude you from using canvas, as there are a couple of SVG-in-canvas implementations, but if you're doing this as a learning project (especially for canvas), that's likely not the route you want to take. A minimal sketch of the per-object approach follows.
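Here is that sketch, assuming the pencil tool records points as the mouse moves (all names are illustrative, not from your code):

var strokes = [];         // every finished pencil stroke
var currentStroke = null; // stroke currently being drawn

function beginStroke(x, y) { currentStroke = [{x: x, y: y}]; }
function addPoint(x, y)    { if (currentStroke) currentStroke.push({x: x, y: y}); }
function endStroke() {
  if (!currentStroke) return;
  strokes.push(currentStroke);
  currentStroke = null;
  localStorage.setItem('strokes', JSON.stringify(strokes)); // persist
}

// Replay everything, e.g. after the canvas has been resized
function redraw() {
  context.clearRect(0, 0, canvas.width, canvas.height);
  strokes.forEach(function (stroke) {
    context.beginPath();
    context.moveTo(stroke[0].x, stroke[0].y);
    for (var i = 1; i < stroke.length; i++) context.lineTo(stroke[i].x, stroke[i].y);
    context.stroke();
  });
}

Because the recorded points are connected with lineTo when replaying, fast mouse movement doesn't leave holes the way storing individual pixels does.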
I'm working on a nonlinear editing animation system with Three.js as the 3D library. I'd like to render a sequence of frames to memory, then play the frames back at an arbitrary frame rate. The idea is that the scene might be too complex to render in real time, so I want to pre-render the frames, then play them back at the target fps. I don't necessarily need interactivity while the animation is playing, but it's important that I see it at full speed.
From these links (How to Render to a Texture in Three.js, Rendering a scene as a texture), I understand how to render to a framebuffer instead of the canvas. Can I store multiple framebuffers then render each of those to the canvas later at a smooth frame rate?
You can try one of the following:
Option a) capture the canvas content and play it back as a PNG sequence (I didn't test this, but it sounds like it could work; a rough sketch follows the list):
render to the canvas just as you always do
export the canvas as data-URL using canvas.toDataURL()
as huge dataURLs are often a problem with devtools, you might want to consider converting the data-urls to a blob:
Blob from DataURL?
repeat for all frames
playback as png-sequence
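An untested sketch of option (a), assuming an existing renderer, scene and camera, plus a separate 2D canvas context (playbackCtx) for playback:

var frames = [];

// Capture: call once per pre-rendered frame
function captureFrame() {
  renderer.render(scene, camera);
  // toDataURL works here because we read it right after rendering,
  // before control returns to the browser
  frames.push(renderer.domElement.toDataURL('image/png'));
}

// Playback at a fixed rate (30 fps here), drawing into a 2D canvas
var frameIndex = 0;
function playback() {
  if (frameIndex >= frames.length) return;
  var img = new Image();
  img.onload = function () { playbackCtx.drawImage(img, 0, 0); };
  img.src = frames[frameIndex++];
  setTimeout(playback, 1000 / 30);
}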
Option b) using RenderTargets (a sketch follows the list):
render scene to a render-target/framebuffer
use renderer.readRenderTargetPixels() to read the rendered result into memory (this returns basically a bitmap)
data can be copied into a 2d canvas ImageData instance
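A sketch of option (b); the exact calls depend on your three.js version (newer versions use renderer.setRenderTarget(), older ones pass the target to render()), and width/height/playbackCtx are assumed:

var rt = new THREE.WebGLRenderTarget(width, height);

// Render one frame into the render target instead of the canvas
renderer.setRenderTarget(rt);
renderer.render(scene, camera);
renderer.setRenderTarget(null);

// Read the pixels back to the CPU (RGBA, so 4 bytes per pixel)
var pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(rt, 0, 0, width, height, pixels);

// Copy into a 2D canvas; note that WebGL rows come back bottom-up,
// so the image will appear flipped unless you reorder the rows
var imageData = new ImageData(new Uint8ClampedArray(pixels), width, height);
playbackCtx.putImageData(imageData, 0, 0);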
Option c) using render-target textures (no downloading from the GPU; a sketch follows the list):
render into a rendertarget and create a new one for each frame (there is very likely a limit on how many of them you can keep around, so this might not be the best solution)
image-data is stored on GPU and referenced via rendertarget.texture
use a fullscreen quad textured with rendertarget.texture for playback. You only need to rebind the texture for each playback frame, so this would be the most efficient option.
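A sketch of option (c); geometry class names vary between three.js versions (PlaneGeometry vs. PlaneBufferGeometry):

// One-time setup: a fullscreen quad in its own scene
var playbackScene = new THREE.Scene();
var playbackCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
var quadMaterial = new THREE.MeshBasicMaterial();
playbackScene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), quadMaterial));

// Playback: rebind the stored frame's texture and draw the quad
function showFrame(renderTarget) {
  quadMaterial.map = renderTarget.texture;
  quadMaterial.needsUpdate = true;
  renderer.render(playbackScene, playbackCamera);
}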
You can use canvas.captureStream() and MediaRecorder to record the scene and then save it. You can later play it back as video without caring about frame rate. You may miss some frames, as recording has its own performance overhead, but it all depends on your use case.
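A sketch of that recording approach (browser support for MediaRecorder and the chosen mime type varies; videoElement is an assumed video element on the page):

var stream = renderer.domElement.captureStream(30); // capture the canvas at 30 fps
var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
var chunks = [];

recorder.ondataavailable = function (e) { chunks.push(e.data); };
recorder.onstop = function () {
  var blob = new Blob(chunks, { type: 'video/webm' });
  videoElement.src = URL.createObjectURL(blob); // play back in a <video> element
};

recorder.start();
// ... render all your frames here ...
// recorder.stop();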
I'm building an application for performing Psychometric Visual Search tests. For example, I'll render a bunch of 'L''s on the screen and, optionally a 'T'. The challenge is to measure the time from the moment the image has been drawn to the monitor, until a key-press from the user indicating a decision as to whether a "target" is present in the image or not.
I'm using Three.js (rather than a 2D library) as I want to evaluate search performance when the scene is rendered orthographically (traditional psychometric 2D visual search) AND when rendered in perspective, as an educational game might be. Targets/distractors will be 3D cubes, polyhedrons, or text (i.e. L's and T's), maybe 50 of them max, so not a very complicated scene at all.
I did spot a prior Q/A where the goal was to determine when a render was finished so the image could be saved. Adapted for my needs it might look something like this:
function init() {...
  mesh = new THREE.Mesh(... // build the targets and distractors
  scene.add(mesh);
  requestAnimationFrame(startTimerWaitForKeyDown);
  render();
}
Am I correct in my understanding that my startTimerWaitForKeyDown function will get called AFTER the initial render() of the scene has been synced to the screen, as an indication that the browser is ready to render a second frame, and that readiness to render the second frame is implicit evidence that the first frame is finished and has been synced to the monitor?
I'm never rendering a second frame... just trying to start a timer at the moment the first render is visible to the test subject (or the best proxy thereof).
Accuracy to a millisecond or two is fine... I'm just trying to avoid the 16.66 ms of render/frame-sync uncertainty.
PS. the prior Q/A is here
THREE.js static scene, detect when webgl render is complete
I've got a WebGL scene and a set of parameters I can use to balance render quality against speed. I'd like to display the scene to the user at the highest quality I can, as long as the frame rate doesn't drop below some threshold as a result. To achieve this, I have to somehow measure the "current" frame rate in response to changes in quality.
But the scene is static as long as the user doesn't interact with it (like rotating camera using the mouse). I don't want to have a loop re-rendering the same scene all the time even if nothing changes. I want to stop rendering if the scene stops moving. Which means I can't simply average the time between successive frames, since I can't distinguish between the renderer being slow and the user just moving his mouse more slowly.
I thought about rendering the scene a number of times at start up, and judge the frame rate from this. But the complexity of the scene might change over time, due to the portion of the scene visible from the current camera position, or due to user interaction outside the canvas. So I have to adapt the quality as the scene changes complexity. Running a calibration loop after every mouse release would perhaps be an option.
I also thought about using the finish call instead of the flush call to accurately measure render time. But while I wait for GL to finish rendering, my application will essentially be unresponsive, and in particular won't be able to queue mouse events. Since I envision rendering ideally taking up all the time between two frames at the target threshold frame rate, that would probably be rather bad. I might get away with using finish instead of flush only on some occasions, like after a mouse release.
What's the best way to achieve a desired frame rate in a WebGL application, or to measure render time on a regular basis?
Why can't you use the average render time?
Just because you render less often doesn't mean you can't average the render times. It will just take a bit longer to get an accurate average.
Once the user starts moving their mouse you'll get an influx of data, and that should quickly give you an average render rate anyway.
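A minimal sketch of that idea (names like lowerQuality/raiseQuality are hypothetical; note this times the CPU side of render(), which, as the question points out, is only a rough proxy for actual GPU time):

var samples = [];

function renderAndMeasure() {
  var start = performance.now();
  renderer.render(scene, camera);
  samples.push(performance.now() - start);
  if (samples.length > 30) samples.shift(); // keep a rolling window

  var avg = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
  if (avg > 1000 / 30) lowerQuality();      // hypothetical quality knobs
  else if (avg < 1000 / 60) raiseQuality();
}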
I am trying to convert the canvas element on this page to a png using the following snippet (e.g. enter in JavaScript console):
(function convertCanvasToImage(canvas) {
  var image = new Image();
  image.src = canvas.toDataURL("image/png");
  return image;
})($$('canvas')[0]);
Unfortunately, the png I get is completely blank. Notice also that the original canvas goes blank after resizing the page.
Why does the canvas go blank? How can I convert this canvas to a png?
Kevin Reid's preserveDrawingBuffer suggestion is the correct one, but there is (usually) a better option. The tl;dr is the code at the end.
It can be expensive to put together the final pixels of a rendered webpage, and coordinating that with rendering WebGL content even more so. The usual flow is:
1. JavaScript issues drawing commands to the WebGL context
2. JavaScript returns, returning control to the main browser event loop
3. The WebGL context turns the drawing buffer (or its contents) over to the compositor for integration into the web page currently being rendered on screen
4. The page, with the WebGL content, is displayed on screen
Note that this is different from most OpenGL applications. In those, rendered content is usually displayed directly, rather than being composited with a bunch of other stuff on a page, some of which may actually be on top of and blended with the WebGL content.
The WebGL spec was changed to treat the drawing buffer as essentially empty after Step 3. The code you're running in devtools is coming after Step 4, which is why you get an empty buffer. This change to the spec allowed big performance improvements on platforms where blanking after Step 3 is basically what actually happens in hardware (like in many mobile GPUs). For you to be able to sometimes grab a copy of the WebGL content after Step 3, the browser would have to always make a copy of the drawing buffer before Step 3, which would make your frame rate drop precipitously on some platforms.
You can do exactly that and force the browser to make the copy and keep the image content accessible by setting preserveDrawingBuffer to true. From the spec:
This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, or using this context as the source image of another context's texImage2D or drawImage call.
In the example you provided, this just means changing the context creation line:
gl = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});
Just keep in mind that it will force that slower path in some browsers and performance will suffer, depending on what and how you are rendering. You should be fine in most desktop browsers, where the copy doesn't actually have to be made, and those do make up the vast majority of WebGL capable browsers...but only for now.
However, there is another option (as somewhat confusingly mentioned in the next paragraph in the spec).
Essentially, you make the copy yourself before Step 2: after all your draw calls have finished, but before you return control to the browser from your code. This is when the WebGL drawing buffer is still intact and accessible, and you should have no trouble getting at the pixels. You use the same toDataURL or readPixels calls you would use otherwise; it's just the timing that's important.
Here you get the best of both worlds. You get a copy of the drawing buffer, but you don't pay for it on every frame, even ones where you didn't need a copy (which may be most of them), as you do with preserveDrawingBuffer set to true.
In the example you provided, just add your code to the bottom of drawScene and you should see the copy of the canvas right below:
function drawScene() {
  ...
  var webglImage = (function convertCanvasToImage(canvas) {
    var image = new Image();
    image.src = canvas.toDataURL('image/png');
    return image;
  })(document.querySelectorAll('canvas')[0]);
  window.document.body.appendChild(webglImage);
}
Here are some things to try. I don't know whether either of these should be necessary to make this work, but they might make a difference.
Add preserveDrawingBuffer: true to the getContext attributes.
Try doing this with a later tutorial which does animation; i.e. draws on the canvas repeatedly rather than just once.
toDataURL() reads data from the drawing buffer.
You don't need to use preserveDrawingBuffer: true.
Before reading the data, you need to call render() again.
Finally:
renderer.domElement.toDataURL();
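A minimal sketch of that: render synchronously right before reading, so the drawing buffer is still intact when toDataURL() runs.

function snapshot() {
  renderer.render(scene, camera); // redraw into the drawing buffer
  return renderer.domElement.toDataURL('image/png'); // read it before returning control to the browser
}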