This question might be weird, but suppose we have a canvas which, for example, draws some 3D content like this experiment.
Disregarding ThreeJS, Babylon or any other library used to achieve the same effect, is it possible to set some interval that captures the birth of every voxel and repeats (redraws) it later?
Simply put, I want to record the canvas draw process and replay it, without using RTC, video, or an image sequence.
What have I tried so far?
I have been trying with the WebGL context and stream capture, but unfortunately could not achieve the desired result.
Can anyone help with this?
You can wrap the WebGL context and capture all the function calls. An example of wrapping the WebGL context would be something like this:
const rawgl = document.querySelector("canvas").getContext("webgl");
const gl = wrapContext(rawgl);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
gl.enable(gl.SCISSOR_TEST);
gl.scissor(40, 50, 200, 60);
gl.clearColor(0,1,1,1);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.scissor(60, 40, 70, 90);
gl.clearColor(1,0,1,1);
gl.clear(gl.COLOR_BUFFER_BIT);
function wrapContext(gl) {
const wrapper = {};
for (let name in gl) {
var prop = gl[name];
if (typeof(prop) === 'function') {
wrapper[name] = wrapFunction(gl, name, prop);
} else {
wrapProperty(wrapper, gl, name);
}
}
return wrapper;
}
function wrapFunction(gl, name, origFn) {
// return a function that logs the call and then calls the original func
return function(...args) {
log(`gl.${name}(${[...args].join(", ")});`);
return origFn.apply(gl, args); // return the result so calls like createBuffer still work
};
}
function wrapProperty(wrapper, gl, name) {
// make a getter because these values are dynamic
Object.defineProperty(wrapper, name, {
enumerable: true,
get: function() {
return gl[name];
},
});
}
function log(...args) {
const elem = document.createElement("pre");
elem.textContent = [...args].join(" ");
document.body.appendChild(elem);
}
canvas { border: 1px solid black; }
pre { margin: 0; }
<canvas></canvas>
In your case, instead of logging the calls, you'd add them to some array of calls, and only on the frames you want captured.
You then need to somehow keep track of all the resources (buffers, textures, framebuffers, renderbuffers, shaders, programs) and all their parameters (like filtering settings on textures), and you also need to track uniform settings etc.
The WebGL-Inspector does this and can playback frames so it might be a good example. There's also this webgl-capture library.
What you need to capture is up to your program. For example, if you know your buffers and textures never change, and they're still in memory when you want to play back, then maybe you don't need to capture the state of buffers and textures, which both of the examples above have to do.
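As a minimal sketch of that idea (building on the wrapContext/wrapFunction above; drawScene is a hypothetical stand-in for whatever code issues your gl.* calls, and it assumes the buffers, textures and programs referenced by the recorded calls still exist at playback time):
const recordedCalls = [];
let capturing = false;
// variation of wrapFunction above: record the call instead of logging it
function wrapFunction(gl, name, origFn) {
  return function(...args) {
    if (capturing) {
      recordedCalls.push({ name, args });
    }
    return origFn.apply(gl, args);
  };
}
// replay the captured frame later, directly on the raw context
function replay(gl) {
  for (const call of recordedCalls) {
    gl[call.name](...call.args);
  }
}
// capture one frame
capturing = true;
drawScene(gl);   // hypothetical: one frame of your drawing code, using the wrapped context
capturing = false;
// ... later ...
replay(rawgl);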
I don't know how you tried the captureStream method, but on your example page this code does work:
let s = mycanvas.captureStream(),
r = new MediaRecorder(s),
chunks = [];
r.ondataavailable = e => chunks.push(e.data);
r.onstop = e => {
let videoURL = URL.createObjectURL(new Blob(chunks));
doSomethingWith(videoURL);
};
r.start();
setTimeout(_=>r.stop(), 3000); // records 3 seconds
Now that you've got a valid blob URL pointing to your canvas' recording, you can play it in a <video> element, and then draw it in your WebGL context.
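For the playback part, a minimal sketch of a doSomethingWith handler (the function name comes from the snippet above; the ad-hoc <video> element and the texture upload comment are assumptions about how you want to use the recording):
function doSomethingWith(videoURL) {
  // play the recording back in an ad-hoc <video> element
  const video = document.createElement("video");
  video.src = videoURL;
  video.loop = true;
  video.muted = true; // most browsers require muted video for scripted playback
  document.body.appendChild(video);
  video.play();
  // the playing <video> can then be uploaded to a WebGL texture each frame, e.g.
  // gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, video);
}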
I am trying to set the volume above 1 on an audio element, following this article:
https://cwestblog.com/2017/08/17/html5-getting-more-volume-from-the-web-audio-api/
I need to be able to set it more than once, so I've set up a global array to store the different results, so that I can adjust the gain per participant.
This does not seem to be working, though; I can't tell any difference when the gain is set. Am I doing something wrong?
window.audioGainParticipants = [];
function amplifyMedia(participant_id, mediaElem, multiplier) {
const exists = window.audioGainParticipants.find(
(x) => x.participant_id === participant_id
);
let result = null;
if (exists) {
result = exists.result;
} else {
var context = new (window.AudioContext || window.webkitAudioContext)();
result = {
context: context,
source: context.createMediaElementSource(mediaElem),
gain: context.createGain(),
media: mediaElem,
amplify: function (multiplier) {
result.gain.gain.value = multiplier;
},
getAmpLevel: function () {
return result.gain.gain.value;
},
};
result.source.connect(result.gain);
result.gain.connect(context.destination);
window.audioGainParticipants.push({ participant_id, result });
}
result.amplify(multiplier);
}
I call this like this...
const audioElement = document.getElementById(
`audio-${participantId}`
);
amplifyMedia(
`audio-${participantId}`,
audioElement,
volume // number between 0-2
);
That article might be outdated. You're not supposed to assign directly to the gain's value; instead, use a setter method.
setValueAtTime is pretty simple, and should meet your needs. The docs show an example of calling this method on a gain node.
https://developer.mozilla.org/en-US/docs/Web/API/AudioParam/setValueAtTime
There's also setTargetAtTime which is a tiny bit more complex, but should sound better if you need to change settings on something that is currently playing.
https://developer.mozilla.org/en-US/docs/Web/API/AudioParam/setTargetAtTime
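Applied to the amplify helper from the question, a minimal sketch (assuming result.context and result.gain are the objects built in amplifyMedia above) would be:
result.amplify = function (multiplier) {
  // schedule the change instead of assigning gain.gain.value directly
  const now = result.context.currentTime;
  result.gain.gain.setValueAtTime(multiplier, now);
  // or, for a smoother transition while audio is already playing:
  // result.gain.gain.setTargetAtTime(multiplier, now, 0.05);
};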
I am trying to program an online version of Kara (the little ladybug you can program to find the leaves lying on the grid). For that, I programmed some functions to draw the needed objects into the grid. Those functions are:
Kara.prototype.draw_object = function (x, y, asset) {
pos = this.calc_pos(x,y);
var img = new Image();
img.onload = function() {
c.drawImage(img, pos.x, pos.y, 40, 40);
}
img.src = asset;
}
Kara.prototype.draw_tree = function(x,y) {
this.draw_object(x,y,this.assets.asset_tree);
}
Kara.prototype.draw_leaf = function(x,y) {
this.draw_object(x,y,this.assets.asset_leaf);
}
Kara.prototype.draw_mush = function(x,y) {
this.draw_object(x,y,this.assets.asset_mush);
}
The asset object is defined in the init function and points to PNG files like img/tree.png. calc_pos calculates the absolute x and y position depending on the position in the grid.
There is a lot of work to do. Because I would like to draw some sample content every time the browser reloads, I wrote a sample function:
Kara.prototype.sample = function() {
kara = this;
kara.draw_kara(1,1); // Not listed above. Draws the little ladybug
setTimeout(function() {
kara.draw_tree(2,1);
}, 4);
setTimeout(function() {
kara.draw_mush(3,1);
}, 8);
setTimeout(function() {
kara.draw_leaf(4,1);
}, 12);
}
Without the timeout everything is drawn to [4][1]. Because it worked when I typed every single command into the developer console, I tried to find the root of the bug by placing some waiting time between the draw commands.
If there are 0.5 seconds between the commands, everything works perfectly. With 4 milliseconds it may work properly, or some objects end up placed on top of each other.
Does anybody know an issue where this occurred, or can anyone give me a hint about the origin of this bug?
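One thing worth checking, purely as a guess based on the draw_object shown above: pos is assigned without var, so it becomes a shared global, and the asynchronous onload callbacks all read whatever value it held last. A minimal sketch of the same function with pos made local:
Kara.prototype.draw_object = function (x, y, asset) {
  var pos = this.calc_pos(x, y); // local now, so each onload keeps its own position
  var img = new Image();
  img.onload = function() {
    c.drawImage(img, pos.x, pos.y, 40, 40);
  };
  img.src = asset;
}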
My code is
canvas.clipTo = function (ctx) {
ctx.beginPath();
for (var i = 0; i < totalPrintArea; i++) {
ctx.save();
ctx.fillStyle = 'rgba(51,51,51,0)';
ctx.rect(clipLft[i], clipTp[i], clipW[i], clipH[i], 'rgba(51,51,51,1)', clipRtn[i]);
ctx.stroke();
ctx.restore();
}
ctx.closePath();
ctx.clip();
canvas.calcOffset();
};
canvas.renderAll();
I am taking values from the red dotted box and applying them to the clip, where multiple masks are generated.
My issue is that it takes all the properties, but not the rotation.
I want to rotate all the rectangles.
I found some code to change the rotation for the clip, like ctx.rotate(50);, but that won't work, as I want all of them to rotate with their own values.
Please guide me on this.
On the original fabric.js GitHub project I saw this comment: https://github.com/kangax/fabric.js/issues/932#issuecomment-27223912
and decided that I needed to prevent ctx.beginPath from being called all the time:
canvas.clipTo = function(ctx) {
var skip = false;
// Workaround to make possible
// making clipTo with
// fabric.Group
var oldBeginPath = ctx.beginPath;
ctx.beginPath = function() {
if (!skip) {
oldBeginPath.apply(this, arguments);
skip = true;
setTimeout(function() {
skip = false;
}, 0);
}
}
group.render(ctx)
};
You can see my workaround to the problem described:
https://jsfiddle.net/freelast/6o0o07p7/
The workaround is not perfect, but I hope it will help somebody.
I have tried using Andrey's answer, but although it makes some interesting points, it didn't work.
If you try to clip the canvas to a single object (e.g. a circle or a rectangle), you can simply do this:
canvas.clipTo = function(ctx) {
shape.render(ctx); //shape is a circle, for instance
}
However, as explained by Kienz and butch2k in the aforementioned comment on GitHub, the problem is that you cannot use this solution with groups. In particular, if you use the following snippet:
canvas.clipTo = function(ctx) {
group.render(ctx);
}
you will only see one object of the group to be used for clipping.
The issue is due to the render method, which calls the ctx.beginPath() and ctx.closePath() for each object in the group. And because only the last couple of beginPath-closePath calls will affect the clipping, you need some workaround.
So in my solution, I have temporarily redefined the ctx.closePath and ctx.beginPath methods (after storing them in other two temporary variables, named oldBeginPath and oldClosePath) so that they do nothing. Then I call oldBeginPath at the beginning, and after rendering all the objects in the group I call the oldClosePath.
And now, here is the (working) snippet:
canvas.clipTo = function(ctx) {
var oldBeginPath = ctx.beginPath;
var oldClosePath = ctx.closePath;
ctx.beginPath = function() {}
ctx.closePath = function() {}
oldBeginPath.apply(ctx);
group.forEachObject(function(shape){
shape.render(ctx);
});
oldClosePath.apply(ctx);
ctx.beginPath = oldBeginPath;
ctx.closePath = oldClosePath;
};
Hope this will save someone some time in the future.
I am trying to use InfoVis / JIT to render a force directed graph visualizing a network.
I am a newbie to both JavaScript and JIT.
I have created my own custom node types using the following code in my js file, which lets me display my image on the node.
$jit.ForceDirected.Plot.NodeTypes.implement({
'icon1': {
'render': function(node, canvas){
var ctx = canvas.getCtx();
var img = new Image();
img.src='magnify.png';
var pos = node.pos.getc(true);
img.onload = function() {
ctx.drawImage(img, pos.x, pos.y);
};
},
'contains': function(node,pos){
var npos = node.pos.getc(true);
dim = node.getData('dim');
return this.nodeHelper.circle.contains(npos, pos, dim);
//return this.nodeHelper.square.contains(npos, pos, dim);
}
}
});
I am assigning this custom node type to the node using "$type": "icon1" in the JSON data object. I do get the image on the node, but the problem is that I am not able to hide it when required. I am able to hide the built-in node types like circle, square etc. using the following code.
node.setData('alpha', 0);
node.eachAdjacency(function(adj) {
adj.setData('alpha', 0);
});
fd.fx.animate({
modes: ['node-property:alpha',
'edge-property:alpha'],
duration: 2000
});
But the same code does not work for custom nodes.
Hence I tried to temporarily change the type of the node to the built-in "circle" type, hide it, and then reset the type of the node to its original, i.e. my custom node type, icon1.
function hideNode( ){
var typeOfNode = node.getData('type');
node.setData( 'type','circle');
node.setData('alpha', 0);
node.eachAdjacency(function(adj) {
adj.setData('alpha', 0);
});
fd.fx.animate({
modes: ['node-property:alpha',
'edge-property:alpha'],
duration: 2000
});
node.setData('type',typeOfNode );
}
I think this should work, but the custom image comes back onto the canvas after a while.
If I don't reset the node's type to its original, i.e. if I comment out the following statement in the above code and call the hide function, then the node gets hidden.
node.setData('type',typeOfNode );
I am not able to figure out how the node gets rendered merely by setting its type back to some custom type. Any help with this question will be appreciated.
I need to reset the node's type to its original because I want the node to be restored when required by calling an unhide function. If I don't reset the node's type to the original, then it would be rendered as a circle when restored.
I have gone through the API and the google group for JIT but couldn't find an answer.
Can anyone help?
Here's a look at a snippet from the Plot's plotNode function:
var alpha = node.getData('alpha'),
ctx = canvas.getCtx();
ctx.save();
ctx.globalAlpha = alpha;
// snip
this.nodeTypes[f].render.call(this, node, canvas, animating);
ctx.restore();
As you can see, the node's alpha value is applied to the canvas immediately before the node's render function is called. After rendering the node, the canvas is restored to the previous state.
The issue here is that your custom node's render function does not render the node synchronously, so the canvas state is restored before the call to drawImage. So, you can do one of two things:
1) Preload and cache your image (preferred approach, as this will also prevent image flickering and help with performance):
// preload image
var magnifyImg = new Image();
magnifyImg.src = 'magnify.png';
// 'icon1' node render function:
'render': function(node, canvas){
var ctx = canvas.getCtx();
var pos = node.pos.getc(true);
ctx.drawImage(magnifyImg, pos.x, pos.y);
}
or 2) save the canvas state, reapply the alpha, and then restore the canvas state after drawing the image in your onload handler:
// 'icon1' node render function:
'render': function(node, canvas){
var ctx = canvas.getCtx();
var img = new Image();
img.src='magnify.png';
var pos = node.pos.getc(true);
img.onload = function() {
ctx.save(); // save current canvas state
ctx.globalAlpha = node.getData('alpha'); // apply node alpha
ctx.drawImage(img, pos.x, pos.y); // draw image
ctx.restore(); // revert to previous canvas state
};
}
How can I unit-test JavaScript that draws on an HTML canvas? The drawing on the canvas should be checked.
I wrote an example for unit-testing canvas and other image-y types with Jasmine and js-imagediff.
Jasmine Canvas Unit Testing
I find this to be better than making sure specific methods on a mock canvas have been invoked, because different series of calls may produce the same image. Typically, I will create a canvas with the expected value, or use a known-stable version of the code to test a development version against.
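A minimal sketch of such a test (the drawScene function and the exact expected drawing are invented for the example; it assumes js-imagediff is loaded as the global imagediff and that its imagediff.equal comparison is available):
describe("drawScene", function () {
  it("matches the reference rendering", function () {
    // canvas produced by the code under test
    var actual = document.createElement("canvas");
    actual.width = 200;
    actual.height = 100;
    drawScene(actual); // hypothetical function under test

    // reference canvas with the expected drawing
    var expected = document.createElement("canvas");
    expected.width = 200;
    expected.height = 100;
    var ctx = expected.getContext("2d");
    ctx.fillStyle = "#f00";
    ctx.fillRect(10, 10, 50, 50);

    // imagediff.equal accepts canvases/images/ImageData; 0 = no tolerance
    expect(imagediff.equal(actual, expected, 0)).toBe(true);
  });
});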
As discussed in the question comments, it's important to check that certain functions have been invoked with suitable parameters. pcjuzer proposed the usage of the proxy pattern. The following example (RightJS code) shows one way to do this:
var Context = new Class({
initialize: function($canvasElem) {
this._ctx = $canvasElem._.getContext('2d');
this._calls = []; // names/args of recorded calls
this._initMethods();
},
_initMethods: function() {
// define methods to test here
// no way to introspect so we have to do some extra work :(
var methods = {
fill: function() {
this._ctx.fill();
},
lineTo: function(x, y) {
this._ctx.lineTo(x, y);
},
moveTo: function(x, y) {
this._ctx.moveTo(x, y);
},
stroke: function() {
this._ctx.stroke();
}
// and so on
};
// attach methods to the class itself
var scope = this;
var addMethod = function(name, method) {
scope[name] = function() {
scope.record(name, arguments);
method.apply(scope, arguments);
};
}
for(var methodName in methods) {
var method = methods[methodName];
addMethod(methodName, method);
}
},
assign: function(k, v) {
this._ctx[k] = v;
},
record: function(methodName, args) {
this._calls.push({name: methodName, args: args});
},
getCalls: function() {
return this._calls;
}
// TODO: expand API as needed
});
// Usage
var ctx = new Context($('myCanvas'));
ctx.moveTo(34, 54);
ctx.lineTo(63, 12);
ctx.assign('strokeStyle', "#FF00FF");
ctx.stroke();
var calls = ctx.getCalls();
console.log(calls);
You can find a functional demo here.
I have used a similar pattern to implement some features missing from the API. You might need to hack it a bit to fit your purposes. Good luck!
I make really simple canvases and test them with Mocha. I do it similarly to Juho Vepsäläinen, but mine looks a little simpler. I wrote it in ES2015.
CanvasMock class:
import ContextMock from './ContextMock.js'
export default class {
constructor (width, height)
{
this.mock = [];
this.width = width;
this.height = height;
this.context = new ContextMock(this.mock);
}
getContext (string)
{
this.mock.push('[getContext ' + string + ']')
return this.context
}
}
ContextMock class:
export default class {
constructor(mock)
{
this.mock = mock
}
beginPath()
{
this.mock.push('[beginPath]')
}
moveTo(x, y)
{
this.mock.push('[moveTo ' + x + ', ' + y + ']')
}
lineTo(x, y)
{
this.mock.push('[lineTo ' + x + ', ' + y + ']')
}
stroke()
{
this.mock.push('[stroke]')
}
}
Some Mocha tests that evaluate the functionality of the mock itself:
describe('CanvasMock and ContextMock', ()=> {
it('should be able to return width and height', ()=> {
let canvas = new CanvasMock(500,600)
assert.equal(canvas.width, 500)
assert.equal(canvas.height, 600)
})
it('should be able to update mock for getContext', ()=> {
let canvas = new CanvasMock(500,600)
let ctx = canvas.getContext('2d')
assert.equal(canvas.mock, '[getContext 2d]')
})
})
A Mocha test that evaluates the functionality of a function that returns a canvas:
import MyFunction from 'MyFunction.js'
describe('MyFunction', ()=> {
it('should be able to return correct canvas', ()=> {
let testCanvas = new CanvasMock(500,600)
let ctx = testCanvas.getContext('2d')
ctx.beginPath()
ctx.moveTo(0,0)
ctx.lineTo(8,8)
ctx.stroke()
assert.deepEqual(MyFunction(new CanvasMock(500,600), 8, 8).mock, testCanvas.mock) // i.e. [ '[getContext 2d]', '[beginPath]', '[moveTo 0, 0]', '[lineTo 8, 8]', '[stroke]' ]
})
So in this example MyFunction takes the canvas you pass in as an argument (MyFunction(new CanvasMock(500,600), 8, 8)), draws a line on it from 0,0 to whatever you pass in as the remaining arguments (the 8, 8), and then returns the edited canvas.
So when you use the function in real life, you can pass in an actual canvas instead of a canvas mock, and it will run those same methods but do actual canvas things.
read about mocks here
Since the "shapes" and "lines" drawn on a canvas are not actual objects (it's like ink on paper), it would be very hard (impossible?) to do a normal unit test on that.
The best you can do with a standard canvas is to analyze the pixel data (via getImageData/putImageData, like what bedraw was saying).
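A minimal sketch of that kind of pixel check (the coordinates and the expected colour are invented for the example, and canvas is assumed to be the element under test):
var ctx = canvas.getContext("2d");
var pixel = ctx.getImageData(10, 10, 1, 1).data; // [r, g, b, a]
console.assert(
  pixel[0] === 255 && pixel[1] === 0 && pixel[2] === 0 && pixel[3] === 255,
  "expected an opaque red pixel at (10, 10)"
);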
Now, I haven't tried this yet, but it might be more what you need. Cake is a library for the canvas. It uses a lot of putImageData/getImageData. This example might help with what you are trying to do with a test.
Hope that helps answer your question.
I've been looking at canvas testing recently, and I've now thought about a page that allows comparing the canvas to a "known good" image version of what the canvas should look like. This would make a visual comparison quick and easy.
And maybe have a button that, assuming the output is OK, updates the image version on the server (by sending the toDataURL() output to it). This new version can then be used for future comparisons.
Not exactly (at all) automated - but it does make comparing the output of your code easy.
Edit:
Now I've made this:
The left chart is the real canvas whilst the right is an image stored in a database of what it should look like (taken from when I know the code is working). There'll be lots of these to test all (eventually) aspects of my code.
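If you want to automate the comparison itself, a minimal sketch against a stored baseline (the /baseline/chart1 endpoint and the global canvas are assumptions, and exact data-URL equality can be brittle across browsers, so treat this only as a starting point):
fetch("/baseline/chart1") // hypothetical endpoint returning the stored data URL string
  .then(function (res) { return res.text(); })
  .then(function (baselineDataUrl) {
    var same = canvas.toDataURL() === baselineDataUrl;
    console.log(same ? "chart matches baseline" : "chart differs from baseline");
  });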
From a developer's point of view, the canvas is almost write-only, because once drawn it's difficult to programmatically get something useful back. Sure, one can do point-by-point recognition, but that's too tedious, and such tests are hard to write and maintain.
It's better to intercept the calls made to a canvas object and investigate those. Here are a few options:
Create a wrapper object that records all the calls. Juho Vepsäläinen posted such an example.
If possible, use a library like fabric.js that offers a higher level of abstraction for drawing. The "drawings" are JS objects that can be inspected directly or converted to SVG, which is easier to inspect and test.
Use Canteen to intercept all the function calls and attribute changes of a canvas object. This is similar to option 1; see the sketch after this list.
Use Canteen with rabbit which offers you a few Jasmine custom matchers for size and alignment and a function getBBox() that can be used to determine the size and the position of the stuff being drawn on the canvas.
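For the Canteen-based options, a minimal sketch (assuming Canteen is loaded and instruments the 2D context so that it exposes a stack() method listing the recorded calls; check the library's docs for the exact accessor):
var ctx = document.createElement("canvas").getContext("2d");
ctx.beginPath();
ctx.moveTo(10, 20);
ctx.lineTo(30, 40);
ctx.stroke();

var calls = ctx.stack(); // array describing the recorded calls and property changes
console.log(calls);
// assert on calls.length, method names or arguments in your Jasmine/Mocha specs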