HTML5 Canvas drawImage is dropping the quality of the original video - javascript

I want to display the same video in two areas of the application. Using a canvas this works fine, but the quality of the original video drops, while the canvas copy looks fine.
var canvas = document.getElementById('shrinkVideo');
var context = canvas.getContext('2d');
var video = document.getElementById('mainVideo');

video.addEventListener('play', () => {
  // canvas.width = 270;
  // canvas.height = 480;
  this.draw(video, context, canvas.width, canvas.height);
}, false);

draw(v, c, w, h) {
  if (v.paused || v.ended) return false;
  c.drawImage(v, 0, 0, w, h);
  setTimeout(this.draw, 20, v, c, w, h);
}
This is my code to keep the two videos in sync, and the sync works fine, but the 'mainVideo' quality drops.
If I remove all the canvas code and just play 'mainVideo', the quality is maintained; as soon as the canvas code is added, the quality drops.
Expected result: this is the output of the video when the canvas code is not added.
Actual result: this is the output I get after adding the canvas code.
Thanks In Advance

I came to this answer because I thought I was experiencing the same issue.
I have a 1080p source on a video element (HD content from an HDMI capture device, which registers as a webcam in the browser).
I had a 1920x1080 canvas and was using ctx.drawImage(video, 0, 0, 1920, 1080). As mentioned by a commenter above, I've found it crucial to draw only at clean multiples of the original width/height values.
I tried with and without imageSmoothingEnabled and various imageSmoothingQuality settings in Chrome/Brave; ultimately, I saw no real difference with these settings.
The canvas in my web app was still coming out extremely blurry: even ~24pt font on screen was unreadable, so the video was basically unusable.
Frustrated by the blurry video, I recreated a full test in a "clean suite", and there I experience no scaling issues at all. I don't know yet what my main application is doing differently, but in this example you can attach any 1080p/720p device and see it scaled quite nicely to 1080p (change the resolution in the JS file if you want to scale to 720p instead):
https://playcode.io/543520
const WIDTH = 1920;
const HEIGHT = 1080;

const video = document.getElementById('video1');        // Video element
const broadcast = document.getElementById('broadcast'); // Canvas element

broadcast.width = video.width = WIDTH;
broadcast.height = video.height = HEIGHT;

let ctx = broadcast.getContext('2d');

const onFrame = function () {
  ctx.drawImage(video, 0, 0, broadcast.width, broadcast.height);
  window.requestAnimationFrame(onFrame);
};

navigator.mediaDevices
  .getUserMedia({ video: true })
  .then(stream => {
    window.stream = stream;
    console.log('got UM');
    video.srcObject = stream;
    onFrame();
  });
Below you can see my viewport, with a video and a canvas element (both 1080p, scaled to ~45% so they fit), using requestAnimationFrame to draw the content. The content is zoomed out, so you can see anti-aliasing, but if you load the example and click on the canvas, it goes fullscreen and the quality is pretty good: I played a 1080p YouTube video on my source machine and couldn't see any difference on my fullscreen 1080p canvas element.
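A related technique that often helps with this kind of blur (a sketch under assumptions, not part of the answer above; the element ids 'mainVideo' and 'shrinkVideo' are taken from the question for illustration) is to size the canvas backing store to the video's intrinsic resolution, scaled by devicePixelRatio, so drawImage never has to resample:

```javascript
// Pure helper: backing-store size for a given intrinsic size and pixel ratio.
function backingSize(videoWidth, videoHeight, dpr) {
  return {
    width: Math.round(videoWidth * dpr),
    height: Math.round(videoHeight * dpr)
  };
}

if (typeof document !== 'undefined') {
  const video = document.getElementById('mainVideo');
  const canvas = document.getElementById('shrinkVideo');
  video.addEventListener('loadedmetadata', () => {
    const { width, height } = backingSize(
      video.videoWidth, video.videoHeight, window.devicePixelRatio || 1);
    canvas.width = width;   // backing store in device pixels
    canvas.height = height;
    // Display size stays in CSS pixels; the browser maps between the two.
    canvas.style.width = video.videoWidth + 'px';
    canvas.style.height = video.videoHeight + 'px';
  });
}
```

On a 2x display this gives, e.g., a 3840x2160 backing store displayed at 1920x1080 CSS pixels, which avoids the upscale-then-downscale blur.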

Related

Out of memory at ImageData creation (Angular) when extracting frames from video

I'm trying to extract frames from a video, and it's working fine. But when I try to extract videos longer than 1 minute I get:
RangeError: Failed to execute 'getImageData' on 'CanvasRenderingContext2D': Out of memory at ImageData creation
I've extracted 1100 frames just fine; that's around a 1-minute video at 24fps. Is there any limit when extracting frames from a video all at once?
Here's how I extracted those frames:
this.video.getFrames(this.file, this.totalFrame, VideoToFramesMethod.totalFrames).then((frames) => {
  frames.forEach((frame, i) => {
    let img = document.createElement('img');
    var canvas = document.createElement('canvas');
    canvas.width = frame.width;
    canvas.height = frame.height;
    canvas.getContext('2d').putImageData(frame, 0, 0);
    img.src = canvas.toDataURL('image/jpeg');
    this.listOfImages.push({
      id: i,
      name: 'Frame ' + i,
      src: img.src
    });
  });
  this.addCheckboxes();
  this.loading = false;
  this.playVideo = true;
  this.poster = this.listOfImages[0];
});
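As a rough sanity check on whether this is simply a memory budget problem: each ImageData holds uncompressed RGBA pixels at 4 bytes per pixel. A back-of-envelope estimate (the 1080p resolution is an assumption; the question doesn't state the actual frame size):

```javascript
// Rough memory estimate for keeping raw ImageData frames in memory.
// The 1920x1080 resolution is an assumed example, not from the question.
function imageDataBytes(width, height) {
  return width * height * 4; // RGBA, one byte per channel
}

const perFrame = imageDataBytes(1920, 1080); // 8,294,400 bytes ≈ 8.3 MB
const total = perFrame * 1100;               // 1100 frames, as in the question
console.log((total / 1e9).toFixed(1) + ' GB'); // prints "9.1 GB"
```

At that scale the out-of-memory error is expected; encoding each frame to JPEG immediately (as the code above does) helps only if the raw ImageData objects are released afterwards.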
EDIT
What I need from the extracted images is this: later the user can draw points onto an image to mark some object. I use fabric.js for this, but I don't manipulate the image itself; I just add a new image (the fabric.js canvas) on top of it. You can try it here: https://bory-cam.web.app
After that, my algorithm can track the movement of the object inside the points. I also need to be able to show each frame. My first approach works, but it is too expensive in memory. So I tried this solution, but it is really slow. I want to try this solution using WebCodecs, but I can't make it work with Angular.
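One middle ground between keeping every decoded frame in memory and re-decoding everything each time (a sketch only; `grabFrame` and the fixed-fps assumption are illustrations, not from the question) is to seek the video element on demand and capture a single frame when it is actually needed:

```javascript
// Pure helper: timestamp (seconds) of frame `index` at a constant frame rate.
// Assumes constant fps, which real videos may not have.
function frameTime(index, fps) {
  return index / fps;
}

// Sketch: seek a <video> element and grab one frame as a JPEG Blob.
// Only one frame's pixels are held in memory at a time.
function grabFrame(video, index, fps) {
  return new Promise((resolve) => {
    video.addEventListener('seeked', function onSeeked() {
      video.removeEventListener('seeked', onSeeked);
      const canvas = document.createElement('canvas');
      canvas.width = video.videoWidth;
      canvas.height = video.videoHeight;
      canvas.getContext('2d').drawImage(video, 0, 0);
      // JPEG keeps memory far below raw ImageData.
      canvas.toBlob(resolve, 'image/jpeg', 0.8);
    });
    video.currentTime = frameTime(index, fps);
  });
}
```

Seeking is slower than reading a pre-extracted array, but the memory footprint stays flat regardless of video length.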

How to Center Image/CreateCapture on p5 Canvas

I've set up a split-screen canvas, currently with only one canvas on the left side of the screen. I want to display my webcam output on the left-hand side, and my current solution is to match the dimensions of the canvas with the webcam output. However, although this fits the canvas, it also shrinks the width dimension of the output. I was wondering if there's any way I can map the capture directly to the canvas rather than having to match the dimensions and positions on the screen. I've tried looking through the available functions for a createCapture object, but there don't seem to be any useful ones. Any help would be appreciated. I have shown my code below:
function setup() {
  createCanvas(windowWidth / 2, windowHeight);
  webcam = createCapture(VIDEO, function(stream) {
    recorder = new MediaRecorder(stream, {});
  });
  webcam.size(windowWidth / 2, windowHeight);
}

function draw() {
  clear();
  image(webcam, 0, 0, windowWidth, windowHeight);
}
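There is no accepted answer in this excerpt, but one common way to avoid the squashed output is to compute "contain"-style (letterbox) dimensions yourself and pass them to image(). A sketch, assuming the fit logic below, which is not from the question:

```javascript
// Pure helper: letterbox ("contain") fit of a source into a destination box.
// Returns the draw position and size that preserve the source aspect ratio.
function fitContain(srcW, srcH, dstW, dstH) {
  const scale = Math.min(dstW / srcW, dstH / srcH);
  const w = srcW * scale;
  const h = srcH * scale;
  return { x: (dstW - w) / 2, y: (dstH - h) / 2, w, h };
}

// In the p5 sketch (runs only in the browser, where p5's globals exist):
function draw() {
  clear();
  const f = fitContain(webcam.width, webcam.height, width, height);
  image(webcam, f.x, f.y, f.w, f.h);
}
```

This way the capture keeps its own aspect ratio regardless of the canvas dimensions, with black bars instead of distortion.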

javascript getDisplayMedia: record at a higher resolution

I was trying to make a basic media recorder with the MediaRecorder API, which is fairly straightforward: get the stream from getDisplayMedia, then record it.
The problem: this only records at the maximum screen size, but no more. So if my screen is 1280x720, it will not record 1920x1080.
This may seem quite obvious, but my intent is that it should record the smaller resolution inside the bigger one. For example:
With the red rectangle representing what my actual screen is recording, and the surrounding black rectangle simply black space, the entire video is now a higher resolution, 1920x1080. This is useful for YouTube, since YouTube scales down anything between 720 and 1080 resolution, which is a problem.
Anyway, I tried attaching the stream from getDisplayMedia to a video element (vid.srcObject = stream), then made a new canvas at 1920x1080, and in the animation loop just did ctx.drawImage(vid, offsetX, offsetY). Outside the loop, where the MediaRecorder was created, I simply did newStream = myCanvas.captureStream() as per the API documentation and passed that to the MediaRecorder. The problem is that, because of the huge canvas overhead, everything is really slow and the framerate is absolutely terrible (I don't have a video example, but just test it yourself).
So: is there some way to optimize the canvas so it doesn't hurt the framerate? (I tried looking into OffscreenCanvas, but I couldn't find a way to get a stream from it to use with MediaRecorder, so it didn't really help.) Is there a better way to capture and record the canvas, or a better way to record the screen within a larger resolution, in client-side JavaScript? If not with client-side JavaScript, is there some kind of real-time video encoder (ffmpeg is too slow) that could run on a server, with each canvas frame sent there and saved? Is there some better way to make a video recorder with any kind of JavaScript, client or server or both?
I don't know what your code looks like, but I managed to get a smooth experience with this piece of code:
(You will also find very good examples here: https://mozdevs.github.io/MediaRecorder-examples/)
<!doctype html>
<html>
  <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <script src="script.js"></script>
  </head>
  <body>
    <canvas id="canvas" style="background: black"></canvas>
  </body>
</html>
// DISCLAIMER: The structure of this code is largely based on examples
// given here: https://mozdevs.github.io/MediaRecorder-examples/.
window.onload = function () {
  navigator.mediaDevices.getDisplayMedia({
    video: true
  })
  .then(function (stream) {
    var video = document.createElement('video');
    // Use "video.srcObject = stream;" instead of "video.src = URL.createObjectURL(stream);"
    // to avoid errors in the examples of https://mozdevs.github.io/MediaRecorder-examples/
    // credits to https://stackoverflow.com/a/53821674/5203275
    video.srcObject = stream;
    video.addEventListener('loadedmetadata', function () {
      initCanvas(video);
    });
    video.play();
  });
};
function initCanvas(video) {
  var canvas = document.getElementById('canvas');
  // Margins around the video inside the canvas.
  var xMargin = 100;
  var yMargin = 100;
  var videoWidth = video.videoWidth;
  var videoHeight = video.videoHeight;
  canvas.width = videoWidth + 2 * xMargin;
  canvas.height = videoHeight + 2 * yMargin;
  var context = canvas.getContext('2d');
  var draw = function () {
    // requestAnimationFrame(draw) will render the canvas as fast as possible;
    // if you want to limit the framerate to a particular value, take a look at
    // https://stackoverflow.com/questions/19764018/controlling-fps-with-requestanimationframe
    requestAnimationFrame(draw);
    context.drawImage(video, xMargin, yMargin, videoWidth, videoHeight);
  };
  requestAnimationFrame(draw);
}
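The snippet above only draws; to actually record, the canvas stream still has to be wired to a MediaRecorder. A minimal sketch of that wiring (the 'video/webm' mime type, the 30 fps capture rate, and the 'canvas' id are assumptions for illustration):

```javascript
// Pure helper: total size in bytes of an array of recorded chunks.
function totalBytes(chunks) {
  return chunks.reduce((sum, c) => sum + c.size, 0);
}

// Sketch: record a canvas via captureStream + MediaRecorder (browser only).
if (typeof document !== 'undefined') {
  const canvas = document.getElementById('canvas');
  const stream = canvas.captureStream(30); // capture at ~30 fps
  const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
  const chunks = [];
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: 'video/webm' });
    console.log('recorded ' + totalBytes(chunks) + ' bytes');
    // e.g. offer a download via URL.createObjectURL(blob)
  };
  recorder.start(1000); // emit a chunk every second
  // Later, when done: recorder.stop();
}
```

Passing a timeslice to start() keeps individual chunks small instead of buffering the whole recording in one Blob.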

Javascript Canvas draw image to buffer only

I'm using Canvas to process an image in various ways, using its image data only. I'm getting the image like this:
const vid = document.querySelector('video');
const canvas = document.querySelector('canvas');
const context = canvas.getContext('2d');
But if I don't physically draw the image on the screen like this:
context.drawImage(vid, 0, 0, canvas.width, canvas.height);
then I can't process its image data. Is there any way this can all be done in data only? I have no use for displaying the image in my app at all. Is there any way to draw it to a buffer only, or something?
All I'm doing is capturing the image and then processing its image data, so I have no need to display it anywhere.
You can dynamically create a canvas element with JavaScript:
var canvas = document.createElement("canvas");
get its context:
var context = canvas.getContext('2d');
and simply never append it to the DOM, by omitting:
document.body.appendChild(canvas);
You are still able to do all drawing operations on it, e.g. drawImage().
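Putting those pieces together with the question's video capture (a sketch; the luma computation is an illustrative stand-in for whatever processing the asker does):

```javascript
// Pure helper: average luma (Rec. 601 weights) of an RGBA pixel buffer.
// Stands in for "processing the image data"; any per-pixel work fits here.
function averageLuma(rgba) {
  let sum = 0;
  for (let i = 0; i < rgba.length; i += 4) {
    sum += 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
  }
  return sum / (rgba.length / 4);
}

// Sketch: process video pixels on a canvas that is never added to the DOM.
if (typeof document !== 'undefined') {
  const vid = document.querySelector('video');
  const canvas = document.createElement('canvas'); // never appended
  canvas.width = vid.videoWidth;
  canvas.height = vid.videoHeight;
  const context = canvas.getContext('2d');
  context.drawImage(vid, 0, 0, canvas.width, canvas.height);
  const { data } = context.getImageData(0, 0, canvas.width, canvas.height);
  console.log('average luma:', averageLuma(data));
}
```

Nothing is ever rendered on screen; the canvas exists purely as an off-DOM pixel buffer.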

First frame from captureStream() not sending

We're working on a project where people can be in a chat room with their webcams, and they can grab a snapshot of someone's cam at that moment, do some annotations on top of it, and then share that modified picture as if it was their own webcam (like sharing a whiteboard).
Capturing the webcam stream into a canvas element where it can be edited was relatively easy: find the canvas element on our page, do a .getContext('2d') on it, and use an open library to add editing tools to it. Grabbing a stream from that canvas was done like so:
var canvasToSend = document.querySelector('canvas');
var stream = canvasToSend.captureStream(60);
var room = osTwilioVideoWeb.getConnectedRoom();
var mytrack = null;
room.localParticipant.publishTrack(stream.getTracks()[0]).then((publication) => {
  mytrack = publication.track;
  var videoElement = mytrack.attach();
});
This publishes the stream alright, but the first frame will not get sent unless something else is drawn on the canvas. Let's say you drew 2 circles and then hit Share: the stream will start but nothing will be shown on the recipients' side until you draw a line, another circle, anything. It seems to need a frame change before it sends data.
I was able to force this from the developer tools with something like context.fill();, but when I tried adding that after the publishing function, even in a then()... no luck.
Any ideas on how to force this "refresh" to happen?
So it seems this is expected behavior (which would make my FF buggy).
From the specs, about the frame request algorithm:
A new frame is requested from the canvas when frameCaptureRequested is true and the canvas is painted.
Let's put some emphasis on "and the canvas is painted". Both conditions are needed: captureStream itself, its frameRate argument elapsing, or a method like requestFrame will all set the frameCaptureRequested flag to true, but we still need the new painting...
The specs even have a note stating
This algorithm results in a captured track not starting until something changes in the canvas.
And Chrome indeed seems to generate an empty CanvasCaptureMediaStreamTrack if the call to captureStream has been made after the canvas has been painted.
const ctx = document.createElement('canvas')
  .getContext('2d');
ctx.fillRect(0, 0, 20, 20);
// let's request a stream from before it gets painted
// (in the same frame)
const stream1 = ctx.canvas.captureStream();
vid1.srcObject = stream1;

// now let's wait until a frame has elapsed
// (rAF fires before the next painting, so we need 2 of them)
requestAnimationFrame(() =>
  requestAnimationFrame(() => {
    const stream2 = ctx.canvas.captureStream();
    vid2.srcObject = stream2;
  })
);
<p>stream initialised in the same frame as the drawings (i.e. before painting).</p>
<video id="vid1" controls autoplay></video>
<p>stream initialised after painting.</p>
<video id="vid2" controls autoplay></video>
So to work around this, you should be able to get a stream with a frame by requesting the stream in the same operation as a first drawing on the canvas, like stream1 in the example above.
Or, you could redraw the canvas context over itself (assuming it is a 2d context) by calling ctx.drawImage(ctx.canvas,0,0) after having set its globalCompositeOperation to 'copy' to avoid transparency issues.
const ctx = document.createElement('canvas')
  .getContext('2d');
ctx.font = '15px sans-serif';
ctx.fillText('if forced to redraw it should work', 20, 20);

// produce a silent stream again
requestAnimationFrame(() =>
  requestAnimationFrame(() => {
    const stream = ctx.canvas.captureStream();
    forcePainting(stream);
    vid.srcObject = stream;
  })
);

// beware: this will work only for canvases initialised with a 2D context
function forcePainting(stream) {
  const ctx = (stream.getVideoTracks()[0].canvas ||
    stream.canvas) // FF has it wrong...
    .getContext('2d');
  const gCO = ctx.globalCompositeOperation;
  ctx.globalCompositeOperation = 'copy';
  ctx.drawImage(ctx.canvas, 0, 0);
  ctx.globalCompositeOperation = gCO;
}
<video id="vid" controls autoplay></video>