I want to record multiple layered canvases using MediaRecorder, but I don't know how to achieve it. Help me, please.
This is my pseudo code:
const RECORD_FRAME = 30;
const canvasVideoTrack = canvas.captureStream(RECORD_FRAME).getVideoTracks()[0];
const waterMarkCanvasTrack = waterMarkCanvas.captureStream(RECORD_FRAME).getVideoTracks()[0];
const stream = new MediaStream();
const mediaRecorder = new MediaRecorder(stream);
mediaRecorder.stream.addTrack(canvasVideoTrack);
mediaRecorder.stream.addTrack(waterMarkCanvasTrack);
// ... recording
You need to draw all your canvases onto a single one.
Even if we could record multiple video streams at once (which we can't yet), what you actually need is to composite these video streams, and for this you use a canvas:
const prepareCanvasAnim = (color, offset) => {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");
  ctx.fillStyle = color;
  let x = offset;
  const anim = () => {
    x = (x + 1) % canvas.width;
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.fillRect(x, offset, 50, 50);
    requestAnimationFrame(anim);
  };
  anim();
  return canvas;
};
const canvas1 = prepareCanvasAnim("red", 20);
const canvas2 = prepareCanvasAnim("green", 80);
document.querySelector(".container").append(canvas1, canvas2);
const btn = document.querySelector("button");
const record = (evt) => {
  btn.textContent = "Stop Recording";
  btn.disabled = true;
  setTimeout(() => btn.disabled = false, 5000); // at least 5s recording
  // prepare our merger canvas
  const canvas = canvas1.cloneNode();
  const ctx = canvas.getContext("2d");
  ctx.fillStyle = "#FFF";
  const toMerge = [canvas1, canvas2];
  const anim = () => {
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    toMerge.forEach(layer => ctx.drawImage(layer, 0, 0));
    requestAnimationFrame(anim);
  };
  anim();
  const stream = canvas.captureStream();
  const chunks = [];
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (evt) => chunks.push(evt.data);
  recorder.onstop = (evt) => exportVid(chunks);
  btn.onclick = (evt) => recorder.stop();
  recorder.start();
};
function exportVid(chunks) {
  const vid = document.createElement("video");
  vid.controls = true;
  vid.src = URL.createObjectURL(new Blob(chunks));
  document.body.append(vid);
  btn.onclick = record;
  btn.textContent = "Start Recording";
}
btn.onclick = record;
canvas { border: 1px solid }
.container canvas { position: absolute }
.container:hover canvas { position: relative }
.container { height: 180px }
<div class="container">Hover here to "untangle" the canvases<br></div>
<button>Start Recording</button><br>
But if you're going this way anyway, you might just as well do the merging from the get-go and append only a single canvas in the document; the browser's compositor will have less work to do.
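For instance, here is a minimal sketch of that merged-from-the-start approach; the layer-drawing functions are illustrative placeholders, not code from the question:

// One visible canvas; each "layer" is just a draw function instead of its own <canvas>.
const merged = document.createElement("canvas");
merged.width = 300;
merged.height = 150;
document.body.append(merged);
const mergedCtx = merged.getContext("2d");

// Hypothetical layer functions standing in for the main scene and the watermark
const layers = [
  (ctx) => { ctx.fillStyle = "red"; ctx.fillRect(20, 20, 50, 50); },
  (ctx) => { ctx.globalAlpha = 0.5; ctx.fillText("watermark", 10, 140); ctx.globalAlpha = 1; },
];

(function frame() {
  mergedCtx.clearRect(0, 0, merged.width, merged.height);
  layers.forEach(draw => draw(mergedCtx));
  requestAnimationFrame(frame);
})();

// Recording then works on this single canvas, exactly as above:
// const recorder = new MediaRecorder(merged.captureStream(30));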
Hey, I am trying to check whether the video element inside an iframe embed has loaded, so I can return video.duration. The issue is that I can't figure out how to check that the video exists; I only get an undefined error. The embed is from mega.nz, so the video src will be a blob: URL.
let video_error = "";
let stream_inject;
const iframe = document.createElement('iframe');
iframe.src = "embed link";
document.body.appendChild(iframe);
const canvas = document.createElement('canvas');
const ctx = canvas.getContext('2d');
iframe.addEventListener("load", function() {
  const video = iframe.contentWindow.document.getElementsByTagName('video')[0];
  video.addEventListener("load", function() {
    video.onloadedmetadata = () => {
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      const audio = new AudioContext();
      const source = audio.createMediaElementSource(video);
      const stream_dest = audio.createMediaStreamDestination();
      source.connect(stream_dest);
      const video_stream = canvas.captureStream(60);
      const audio_stream = stream_dest.stream;
      stream_inject = new MediaStream([
        video_stream.getVideoTracks()[0],
        audio_stream.getAudioTracks()[0]
      ]);
    };
    video.onplay = () => {
      function step() {
        ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
        requestAnimationFrame(step);
      }
      requestAnimationFrame(step);
    };
    video.onerror = event => {
      video_error = event.target.error.message;
    };
  });
});
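For what it's worth, here is a minimal sketch of the waiting logic, assuming a same-origin iframe (for a cross-origin embed like mega.nz, reading iframe.contentWindow.document from the parent page throws a SecurityError, so this approach can't work there). Note also that <video> elements fire loadedmetadata rather than load:

iframe.addEventListener("load", () => {
  // Poll until the embedded page has actually inserted its <video> element
  const poll = setInterval(() => {
    const video = iframe.contentWindow.document.querySelector("video");
    if (!video) return; // not in the DOM yet
    clearInterval(poll);
    if (video.readyState >= 1) {
      console.log("duration:", video.duration); // metadata already available
    } else {
      video.addEventListener("loadedmetadata", () => {
        console.log("duration:", video.duration);
      });
    }
  }, 200);
});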
The question may seem simple, but I can't understand what my mistake is.
I draw images on my canvas and everything is fine, but the moment I add clearRect, the elements are completely removed from the canvas and never restored, even though the clearing should happen before the rendering on each frame.
What's my mistake? Everything seems simple, but I've been stuck on this for a very long time.
App - the main game class
Game - responsible for rendering to the canvas and clearing it
Player - responsible for the logic of the player's behavior
Animation - requestAnimationFrame is called in this class, on the function passed to it
class App {
  constructor() {
    this.player = new Player("Player 1", 0, 102);
    this.game = new Game();
    this.game.CreateCanvas();
    window.addEventListener('keyup', () => this.player.UpKey());
    window.addEventListener('keydown', (event) => this.player.HandlerKeyPress(event));
    new Animation(this.Display.bind(this));
  }
  Display() {
    this.game.ClearCanvas();
    this.SetLevel();
  }
  SetLevel() {
    this.game.LoadSprite(assets.mario, [...player_assets.mario.small.standing_right, ...this.player.GetPosition(), 16, 16]);
    this.StaticObject();
  }
  StaticObject() {
    for (let i = 0; i < 32; i++) {
      this.game.LoadSprite(assets.block, [...lvl_1.block.ground.size, i * 16, 134, 16, 16]);
      this.game.LoadSprite(assets.block, [...lvl_1.block.ground.size, i * 16, 118, 16, 16]);
    }
  }
}
new App();
export default class Game {
  CreateCanvas() {
    this.canvas = document.createElement("canvas");
    this.ctx = this.canvas.getContext("2d");
    document.body.appendChild(this.canvas);
  }
  ClearCanvas() {
    this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);
  }
  SetBackgroundColor(color) {
    this.ctx.fillStyle = color;
    this.ctx.fillRect(0, 0, this.canvas.width, this.canvas.height);
  }
  LoadSprite(src, position) {
    const _image = new Image();
    _image.src = src;
    _image.onload = () => {
      this.ctx.drawImage(_image, ...position, true);
    };
  }
}
export default class LoopAnimation {
  constructor(display) {
    this.display = display;
    this.Animation = this.Animation.bind(this);
    this.Animation();
  }
  Animation() {
    requestAnimationFrame(this.Animation);
    this.display();
  }
}
My suspicion is that it's related to caching and the onload event not firing. The answer to "image.onload event and browser cache" suggests setting the onload handler before assigning src:
var img = new Image();
img.onload = function() {
  alert("image is loaded");
};
img.src = "img.jpg";
If you want to reuse loaded images, store them in an array and reuse them from there. For example:
const canvas = document.createElement("canvas");
document.body.appendChild(canvas);
canvas.width = 450;
canvas.height = 250;
const ctx = canvas.getContext("2d");

const assets = [
  "https://picsum.photos/id/237/150",
  "https://picsum.photos/id/235/150",
  "https://picsum.photos/id/232/150",
];

const assetsLoaded = assets.map(url =>
  new Promise(resolve => {
    const img = new Image();
    img.onload = e => resolve(img);
    img.src = url;
  })
);

Promise
  .all(assetsLoaded)
  .then(images => {
    (function gameLoop() {
      requestAnimationFrame(gameLoop);
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      images.forEach((e, i) =>
        ctx.drawImage(
          e,
          i * 150 + Math.cos(Date.now() * 0.003 + i) * 20, // x
          Math.sin(Date.now() * 0.005 + i) * 50 + 50       // y
        )
      );
    })();
  })
  .catch(err => console.error(err));

// from https://stackoverflow.com/a/61337279/3807365
Specifically for your question, you can declare a global object var cache = {} whose keys are the src values of the images to load. So, for example:
var cache = {};

LoadSprite(src, position) {
  if (cache[src]) {
    this.ctx.drawImage(cache[src], ...position, true);
    return;
  }
  const _image = new Image();
  // register onload before assigning src, as discussed above
  _image.onload = () => {
    cache[src] = _image;
    this.ctx.drawImage(_image, ...position, true);
  };
  _image.src = src;
}
Hello everyone!
Is it possible to create an image with JavaScript, then render shapes on it, then draw it to the game canvas?
Without using a data URL, a URL, or src, or any of that!
var ctx = document.getElementById("canvas").getContext("2d");
var img = new Image();
// TODO: Draw stuff to the image img

function game() {
  window.requestAnimationFrame(game);
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.drawImage(img, 0, 0, 256, 256);
}
window.requestAnimationFrame(game);
The CanvasRenderingContext2D.drawImage() function can take multiple types of images as a source, including another canvas. In the example below, an image is loaded into the first canvas. You can then draw on it by holding down the mouse and moving it. When you release, the second canvas draws an image of the first canvas as it is at that moment.
And all the magic is in the last function:
contextTwo.drawImage(canvasOne, 0, 0, 256, 256);
const canvasOne = document.getElementById('canvas1');
const canvasTwo = document.getElementById('canvas2');
const contextOne = canvasOne.getContext('2d');
const contextTwo = canvasTwo.getContext('2d');

canvasOne.width = 256;
canvasOne.height = 256;
canvasTwo.width = 256;
canvasTwo.height = 256;

const canvasBounds = canvasOne.getBoundingClientRect();

let mouseData = {
  isClicked: false,
  position: [0, 0],
};

const onMouseDown = event => {
  mouseData.isClicked = true;
  render();
};

const onMouseMove = ({ clientX, clientY }) => {
  const x = clientX - canvasBounds.left;
  const y = clientY - canvasBounds.top;
  mouseData.position = [x, y];
  render();
};

const onMouseUp = event => {
  mouseData.isClicked = false;
  render();
  moveImage();
};

function setup() {
  const img = new Image();
  img.src = '//picsum.photos/256/256';
  img.onload = function() {
    contextOne.drawImage(img, 0, 0, 256, 256);
  };
  canvasOne.addEventListener('mousedown', onMouseDown);
  canvasOne.addEventListener('mousemove', onMouseMove);
  canvasOne.addEventListener('mouseup', onMouseUp);
}

function render() {
  requestAnimationFrame(() => {
    const { isClicked, position } = mouseData;
    const [x, y] = position;
    if (isClicked) {
      contextOne.beginPath();
      contextOne.arc(x, y, 5, 0, Math.PI * 2);
      contextOne.fillStyle = 'red';
      contextOne.fill();
      contextOne.closePath();
    }
  });
}

function moveImage() {
  contextTwo.drawImage(canvasOne, 0, 0, 256, 256);
}

setup();
body {
  display: flex;
}
canvas {
  width: 256px;
  height: 256px;
  border: 1px solid #d0d0d0;
}
<canvas id="canvas1"></canvas>
<canvas id="canvas2"></canvas>
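If you don't even want the first canvas in the page, here is a minimal sketch of the same idea with a detached canvas standing in for the Image object from the question; no URLs, data URLs, or src involved:

// A detached (never appended) canvas acts as the "image"
const sprite = document.createElement("canvas");
sprite.width = 256;
sprite.height = 256;
const spriteCtx = sprite.getContext("2d");
spriteCtx.fillStyle = "teal";
spriteCtx.fillRect(0, 0, 256, 256);
spriteCtx.fillStyle = "white";
spriteCtx.beginPath();
spriteCtx.arc(128, 128, 64, 0, Math.PI * 2);
spriteCtx.fill();

const ctx = document.getElementById("canvas").getContext("2d");
(function game() {
  requestAnimationFrame(game);
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.drawImage(sprite, 0, 0, 256, 256); // a canvas is a valid image source
})();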
I am using a JavaScript library called face-api.js.
I need to extract the face from the video frame when face-api detects a face. Could anyone help me with that part?
const video = document.getElementById('video');

Promise.all([
  faceapi.nets.tinyFaceDetector.loadFromUri('/models')
]).then(startVideo);

function startVideo() {
  navigator.getUserMedia(
    { video: {} },
    stream => video.srcObject = stream,
    err => console.error(err)
  );
}

video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);
  setInterval(async () => {
    const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions());
    console.log('Box: ', detections[0].detection._box);
    const resizedDetections = faceapi.resizeResults(detections, displaySize);
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resizedDetections);
  }, 5000);
});
Add the extractFaceFromBox function below to your code; it extracts a face from a video frame given a bounding box and displays the result in outputImage.
Try this code and enjoy.
// This is your code
video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video);
  document.body.append(canvas);
  const displaySize = { width: video.width, height: video.height };
  faceapi.matchDimensions(canvas, displaySize);
  setInterval(async () => {
    const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions());
    // Call this function to extract and display the face
    extractFaceFromBox(video, detections[0].detection.box);
    const resizedDetections = faceapi.resizeResults(detections, displaySize);
    canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height);
    faceapi.draw.drawDetections(canvas, resizedDetections);
  }, 5000);
});

let outputImage = document.getElementById('outputImage');

// This function extracts a face from a video frame given a bounding box
// and displays the result in outputImage
async function extractFaceFromBox(inputImage, box) {
  const regionsToExtract = [
    new faceapi.Rect(box.x, box.y, box.width, box.height)
  ];
  let faceImages = await faceapi.extractFaces(inputImage, regionsToExtract);
  if (faceImages.length === 0) {
    console.log('Face not found');
  } else {
    faceImages.forEach(cnv => {
      outputImage.src = cnv.toDataURL();
    });
  }
}
This is not specific to face-api.js, but you can use a canvas to extract an image from a video. Here is a little function I wrote for my case.
const extractFace = async (video, x, y, width, height) => {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  const context = canvas.getContext("2d");
  // Get a screenshot from the video
  context?.drawImage(video, 0, 0, canvas.width, canvas.height);
  const dataUrl = canvas.toDataURL("image/jpeg");
  const image = new Image();
  image.src = dataUrl;

  const canvasImg = document.createElement("canvas");
  canvasImg.width = width;
  canvasImg.height = height;
  const ctx = canvasImg.getContext("2d");
  image.onload = () => {
    // Crop the image
    ctx?.drawImage(image, x, y, width, height, 0, 0, width, height);
    canvasImg.toBlob((blob) => {
      // Do something with the blob. Alternatively, you can convert it
      // to a data URL like the video screenshot above.
      // I was using React, so I just called my handler.
      handSavePhoto(blob);
    }, "image/jpeg");
  };
};
You don't have to take the screenshot first; you can just crop directly, but I found after testing that cropping from an image gives more consistent results. Here is how you would do it in that case:
const extractFace = async (video, x, y, width, height) => {
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  const context = canvas.getContext("2d");
  // Crop the face directly from the video frame
  context?.drawImage(video, x, y, width, height, 0, 0, width, height);
  canvas.toBlob((blob) => {
    handSavePhoto(blob);
  }, "image/jpeg");
};
With that out of the way, you can now use face-api data to get the face you want.
// assuming your video element is stored in the video variable
const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions());
const {x, y, width, height} = detections[0].detection.box;
extractFace(video, x, y, width, height);
You can read more about drawImage from here.
Check that detections.length is greater than 0 before reading detections[0]; it means the detector actually found something in front of the camera.
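A minimal sketch of that guard, reusing the names from the snippets above:

setInterval(async () => {
  const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions());
  if (detections.length === 0) return; // no face this tick, skip instead of crashing on detections[0]
  extractFaceFromBox(video, detections[0].detection.box);
}, 5000);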
Try this fiddle in Chrome and in Firefox.
https://jsfiddle.net/lvl5hm/xosan6w9/29/
In Chrome it takes about 0.5-2 ms to draw a video frame to the canvas, but Firefox for some reason takes 20-40 ms, which is pretty insane.
Is there something that can help me improve the Firefox performance?
const canvas = document.getElementById('canvas');
canvas.width = 500;
canvas.height = 300;
const ctx = canvas.getContext('2d');

const video = document.createElement('video');
video.src = 'https://static.beeline.ru/upload/images/business/delo/newmain/main.mp4';
video.muted = true;
video.loop = true;
video.oncanplay = () => {
  video.play();
};

const frameCounterElement = document.getElementById('frameCounter');
let duration = 0;
setInterval(() => {
  frameCounterElement.innerHTML = duration.toFixed(2) + 'ms to render';
}, 400);

function loop() {
  const startTime = performance.now();
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  duration = performance.now() - startTime;
  requestAnimationFrame(loop);
}
loop();
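One experiment worth trying (an assumption, not a verified Firefox fix): snapshot the current frame into an ImageBitmap and draw that instead of handing the <video> element to drawImage directly. This reuses the variables from the fiddle code above:

async function bitmapLoop() {
  const startTime = performance.now();
  // createImageBitmap snapshots the current video frame
  const bitmap = await createImageBitmap(video);
  ctx.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
  bitmap.close(); // release the decoded frame
  duration = performance.now() - startTime;
  requestAnimationFrame(bitmapLoop);
}
bitmapLoop();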