It is known that iOS Safari does not support canvas.captureStream() for (e.g.) piping a canvas's content into a video element; see this demo, which does not work in iOS Safari.
However, canvas.captureStream() is a valid function in iOS Safari, and it correctly returns a CanvasCaptureMediaStreamTrack; it just doesn't function as intended. To detect browsers that don't support canvas.captureStream, an easy test would be typeof canvas.captureStream === 'function', but at least for iOS Safari we can't rely on that. Neither can we rely on the type of the returned value.
How do I write JavaScript that detects whether the current browser effectively supports canvas.captureStream()?
I have no iOS device to test on, but according to the comments on the issue you linked to, captureStream() actually works; what doesn't is the HTMLVideoElement's reading of this MediaStream. So that's what you actually want to test.
According to the messages there, the video doesn't even fail to load (i.e. the metadata are correctly set), and I don't expect events like error to fire. Still, if it does fail, then it's quite simple to test: check whether a video is able to play such a MediaStream.
function testReadingOfCanvasCapturedStream() {
  // first check the DOM API is available
  if (!testSupportOfCanvasCaptureStream()) {
    return Promise.resolve(false);
  }
  // create a test canvas
  const canvas = document.createElement("canvas");
  // we need to initialize a context on the canvas
  const ctx = canvas.getContext("2d");
  const stream = canvas.captureStream();
  const vid = document.createElement("video");
  vid.muted = true;
  vid.playsInline = true;
  vid.srcObject = stream;
  // Safari needs us to draw on the canvas
  // asynchronously, after we requested the MediaStream
  setTimeout(() => ctx.fillRect(0, 0, 5, 5));
  // if it failed, .play() would be enough,
  // but according to the comments on the issue, it isn't
  return vid.play()
    .then(() => true, () => false)
    .finally(() => {
      // clean up: stop the stream's tracks
      // (.finally passes the previous resolution value through)
      stream.getTracks().forEach(track => track.stop());
    });
}
function testSupportOfCanvasCaptureStream() {
  return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}
testReadingOfCanvasCapturedStream()
  .then(supports => console.log(supports));
But if the video is able to play and no image is painted, then we have to go a bit deeper and check what has actually been painted on the video. To do this, we'll draw some color on the canvas, wait for the video to have loaded, and draw it back on the canvas before checking the color of the frame on the canvas:
async function testReadingOfCanvasCapturedStream() {
  // first check the DOM API is available
  if (!testSupportOfCanvasCaptureStream()) {
    return false;
  }
  // create a test canvas
  const canvas = document.createElement("canvas");
  // we need to initialize a context on the canvas
  const ctx = canvas.getContext("2d");
  const stream = canvas.captureStream();
  const clean = () => stream.getTracks().forEach(track => track.stop());
  const vid = document.createElement("video");
  vid.muted = true;
  vid.srcObject = stream;
  // Safari needs us to draw on the canvas
  // asynchronously, after we requested the MediaStream
  setTimeout(() => {
    // we draw in a well-known color
    ctx.fillStyle = "#FF0000";
    ctx.fillRect(0, 0, 300, 150);
  });
  try {
    await vid.play();
  }
  catch (e) {
    // failed to load; no need to go deeper,
    // it's not supported
    clean();
    return false;
  }
  // here we should have our canvas painted on the video;
  // let's keep this image on the video
  vid.pause();
  // now draw it back on the canvas
  ctx.clearRect(0, 0, 300, 150);
  ctx.drawImage(vid, 0, 0);
  const pixel_data = ctx.getImageData(5, 5, 1, 1).data;
  const red_channel = pixel_data[0];
  clean();
  return red_channel > 0; // it has red
}
function testSupportOfCanvasCaptureStream() {
  return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}
testReadingOfCanvasCapturedStream()
  .then(supports => console.log(supports));
Related
I'm working on a client-side project which lets a user supply a video file and apply basic manipulations to it. I'm trying to extract the frames from the video reliably. At the moment I have a <video> which I'm loading selected video into, and then pulling out each frame as follows:
Seek to the beginning
Pause the video
Draw <video> to a <canvas>
Capture the frame from the canvas with .toDataURL()
Seek forward by 1 / 30 seconds (1 frame).
Rinse and repeat
This is a rather inefficient process, and more specifically, it is proving unreliable, as I'm often getting stuck frames. This seems to stem from the browser not updating the actual <video> element before it draws to the canvas.
I'd rather not have to upload the original video to the server just to split the frames, and then download them back to the client.
Any suggestions for a better way to do this are greatly appreciated. The only caveat is that I need it to work with any format the browser supports (decoding in JS isn't a great option).
[2021 update]: Since this question (and answer) was first posted, things have evolved in this area, and it is finally time for an update; the method that was presented here has gone out of date, but luckily a few new or incoming APIs can help us do a better job of extracting video frames:
The most promising and powerful one, but still under development, with a lot of restrictions: WebCodecs
This new API unleashes access to the media decoders and encoders, enabling us to access raw data from video frames (YUV planes), which may be a lot more useful for many applications than rendered frames; and for those who need rendered frames, the VideoFrame interface that this API exposes can be drawn directly to a <canvas> element or converted to an ImageBitmap, avoiding the slow route through the MediaElement.
However, there is a catch: apart from its currently low support, this API requires that the input has already been demuxed.
There are demuxers available online; for instance, for MP4 videos, GPAC's mp4box.js will help a lot.
A full example can be found on the proposal's repo.
The key part consists of
const decoder = new VideoDecoder({
  output: onFrame, // the callback to handle all the VideoFrame objects
  error: (e) => console.error(e),
});
decoder.configure(config); // depends on the input file; your demuxer should provide it
demuxer.start((chunk) => { // depends on the demuxer, but you need it to return chunks of video data
  decoder.decode(chunk); // will trigger our onFrame callback
});
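For context, here is a rough sketch of how a demuxer like mp4box.js could feed that decoder. This is only an illustration under assumptions: videoUrl stands in for your input file's URL, and for AVC/H.264 input you would additionally need to extract the avcC box into config.description, which is omitted here:

// Sketch only: wiring mp4box.js (loaded as the global MP4Box) to a VideoDecoder.
const mp4boxFile = MP4Box.createFile();
mp4boxFile.onReady = (info) => {
  const track = info.videoTracks[0];
  decoder.configure({
    codec: track.codec, // e.g. "vp09.00.10.08"; AVC would also need `description`
    codedWidth: track.video.width,
    codedHeight: track.video.height,
  });
  mp4boxFile.setExtractionOptions(track.id);
  mp4boxFile.start();
};
mp4boxFile.onSamples = (trackId, user, samples) => {
  for (const sample of samples) {
    decoder.decode(new EncodedVideoChunk({
      type: sample.is_sync ? "key" : "delta",
      timestamp: (sample.cts * 1_000_000) / sample.timescale, // µs
      duration: (sample.duration * 1_000_000) / sample.timescale,
      data: sample.data,
    }));
  }
};
// fetch the whole file and feed it to the demuxer
// (mp4box.js expects a fileStart property on each appended ArrayBuffer)
const buffer = await (await fetch(videoUrl)).arrayBuffer();
buffer.fileStart = 0;
mp4boxFile.appendBuffer(buffer);
mp4boxFile.flush();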
Note that we can even grab the frames of a MediaStream, thanks to MediaCapture Transform's MediaStreamTrackProcessor.
This means that we should be able to combine HTMLMediaElement.captureStream() and this API in order to get our VideoFrames, without the need for a demuxer. However this is true only for a few codecs, and it means that we will extract frames at reading speed...
Anyway, here is an example working on the latest Chromium-based browsers, with chrome://flags/#enable-experimental-web-platform-features switched on:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async (evt) => {
  if (window.MediaStreamTrackProcessor) {
    let stopped = false;
    const track = await getVideoTrack();
    const processor = new MediaStreamTrackProcessor(track);
    const reader = processor.readable.getReader();
    readChunk();
    function readChunk() {
      reader.read().then(async ({ done, value }) => {
        if (value) {
          const bitmap = await createImageBitmap(value);
          const index = frames.length;
          frames.push(bitmap);
          select.append(new Option("Frame #" + (index + 1), index));
          value.close();
        }
        if (!done && !stopped) {
          readChunk();
        } else {
          select.disabled = false;
        }
      });
    }
    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};
select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};
async function getVideoTrack() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  const [track] = video.captureStream().getVideoTracks();
  video.onended = (evt) => track.stop();
  return track;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
The easiest to use, but still with relatively poor browser support, and subject to the browser dropping frames: HTMLVideoElement.requestVideoFrameCallback
This method allows us to schedule a callback that fires whenever a new frame is painted on the HTMLVideoElement.
It is higher level than WebCodecs and thus may have more latency; moreover, with it we can only extract frames at reading speed.
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async (evt) => {
  if (HTMLVideoElement.prototype.requestVideoFrameCallback) {
    let stopped = false;
    const video = await getVideoElement();
    const drawingLoop = async (timestamp, frame) => {
      const bitmap = await createImageBitmap(video);
      const index = frames.length;
      frames.push(bitmap);
      select.append(new Option("Frame #" + (index + 1), index));
      if (!video.ended && !stopped) {
        video.requestVideoFrameCallback(drawingLoop);
      } else {
        select.disabled = false;
      }
    };
    // the last call to rVFC may happen before .ended is set but never resolve
    video.onended = (evt) => select.disabled = false;
    video.requestVideoFrameCallback(drawingLoop);
    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};
select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};
async function getVideoElement() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  return video;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
For your Firefox users, Mozilla's non-standard HTMLMediaElement.seekToNextFrame()
As its name implies, this will make your <video> element seek to the next frame.
Combining this with the seeked event, we can build a loop that will grab every frame of our source, faster than reading speed (yeah!).
But this method is proprietary, available only in Gecko-based browsers, not on any standards track, and will probably be removed in the future once they implement the methods described above.
But for the time being, it is the best option for Firefox users:
const frames = [];
const button = document.querySelector("button");
const select = document.querySelector("select");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
button.onclick = async (evt) => {
  if (HTMLMediaElement.prototype.seekToNextFrame) {
    let stopped = false;
    const video = await getVideoElement();
    const requestNextFrame = (callback) => {
      video.addEventListener("seeked", () => callback(video.currentTime), {
        once: true
      });
      video.seekToNextFrame();
    };
    const drawingLoop = async (timestamp, frame) => {
      if (video.ended) {
        select.disabled = false;
        return; // FF apparently doesn't like to create ImageBitmaps
                // from ended videos...
      }
      const bitmap = await createImageBitmap(video);
      const index = frames.length;
      frames.push(bitmap);
      select.append(new Option("Frame #" + (index + 1), index));
      if (!video.ended && !stopped) {
        requestNextFrame(drawingLoop);
      } else {
        select.disabled = false;
      }
    };
    requestNextFrame(drawingLoop);
    button.onclick = (evt) => stopped = true;
    button.textContent = "stop";
  } else {
    console.error("your browser doesn't support this API yet");
  }
};
select.onchange = (evt) => {
  const frame = frames[select.value];
  canvas.width = frame.width;
  canvas.height = frame.height;
  ctx.drawImage(frame, 0, 0);
};
async function getVideoElement() {
  const video = document.createElement("video");
  video.crossOrigin = "anonymous";
  video.src = "https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm";
  document.body.append(video);
  await video.play();
  return video;
}
video,canvas {
max-width: 100%
}
<button>start</button>
<select disabled>
</select>
<canvas></canvas>
The least reliable method, which has stopped working over time: HTMLVideoElement.ontimeupdate
The pause - draw - play - wait for timeupdate strategy used to be (in 2015) a quite reliable way to know when a new frame had been painted to the element, but since then browsers have put serious limitations on this event, which used to fire at a great rate; now there isn't much information we can grab from it...
I am not sure I can still advocate its use. I didn't check how Safari (currently the only browser without another solution) handles this event (their handling of media is very weird to me), and there is a good chance that a simple setTimeout(fn, 1000 / 30) loop is actually more reliable in most cases.
Here's a working function that was tweaked from this question:
async function extractFramesFromVideo(videoUrl, fps = 25) {
  return new Promise(async (resolve) => {
    // fully download it first (no buffering):
    let videoBlob = await fetch(videoUrl).then((r) => r.blob());
    let videoObjectUrl = URL.createObjectURL(videoBlob);
    let video = document.createElement("video");
    let seekResolve;
    video.addEventListener("seeked", async function () {
      if (seekResolve) seekResolve();
    });
    video.src = videoObjectUrl;
    // workaround chromium metadata bug (https://stackoverflow.com/q/38062864/993683)
    while (
      (video.duration === Infinity || isNaN(video.duration)) &&
      video.readyState < 2
    ) {
      await new Promise((r) => setTimeout(r, 1000));
      video.currentTime = 10000000 * Math.random();
    }
    let duration = video.duration;
    let canvas = document.createElement("canvas");
    let context = canvas.getContext("2d");
    let [w, h] = [video.videoWidth, video.videoHeight];
    canvas.width = w;
    canvas.height = h;
    let frames = [];
    let interval = 1 / fps;
    let currentTime = 0;
    while (currentTime < duration) {
      video.currentTime = currentTime;
      await new Promise((r) => (seekResolve = r));
      context.drawImage(video, 0, 0, w, h);
      let base64ImageData = canvas.toDataURL();
      frames.push(base64ImageData);
      currentTime += interval;
    }
    resolve(frames);
  });
}
Usage:
let frames = await extractFramesFromVideo("https://example.com/video.webm");
Note that there's currently no easy way to determine the actual/natural frame rate of a video unless perhaps you use ffmpeg.js, but that's a 10+ megabyte javascript file (since it's an emscripten port of the actual ffmpeg library, which is obviously huge).
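That said, on browsers that support the requestVideoFrameCallback method presented above, you can at least approximate the frame rate by averaging the deltas of the mediaTime values it reports; here is a minimal sketch, assuming a constant frame rate and an already playing video:

// Rough sketch: estimate a video's frame rate from requestVideoFrameCallback metadata.
function estimateFps(video, sampleCount = 30) {
  return new Promise((resolve) => {
    let lastMediaTime = null;
    const deltas = [];
    const tick = (now, metadata) => {
      // metadata.mediaTime is the presentation timestamp of the displayed frame
      if (lastMediaTime !== null) {
        deltas.push(metadata.mediaTime - lastMediaTime);
      }
      lastMediaTime = metadata.mediaTime;
      if (deltas.length >= sampleCount) {
        const average = deltas.reduce((a, b) => a + b, 0) / deltas.length;
        resolve(1 / average);
      } else {
        video.requestVideoFrameCallback(tick);
      }
    };
    video.requestVideoFrameCallback(tick);
  });
}
// usage: const fps = await estimateFps(document.querySelector("video"));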
2023 answer:
If you want to extract all frames reliably (i.e. no "seeking" and missing frames), and do so as fast as possible (i.e. not limited by playback speed or other factors) then you probably want to use the WebCodecs API. As of writing it's supported in Chrome and Edge. Other browsers will soon follow - hopefully by the end of 2023 there will be wide support.
I put together a simple library for this, but it currently only supports mp4 files. Here's an example:
<canvas id="canvasEl"></canvas>
<script type="module">
  import getVideoFrames from "https://deno.land/x/get_video_frames@v0.0.8/mod.js"

  let ctx = canvasEl.getContext("2d");

  // `getVideoFrames` requires a video URL as input.
  // If you have a file/blob instead of a videoUrl, turn it into a URL like this:
  let videoUrl = URL.createObjectURL(fileOrBlob);

  await getVideoFrames({
    videoUrl,
    onFrame(frame) { // `frame` is a VideoFrame object: https://developer.mozilla.org/en-US/docs/Web/API/VideoFrame
      ctx.drawImage(frame, 0, 0, canvasEl.width, canvasEl.height);
      frame.close();
    },
    onConfig(config) {
      canvasEl.width = config.codedWidth;
      canvasEl.height = config.codedHeight;
    },
  });

  URL.revokeObjectURL(videoUrl); // revoke the URL to prevent a memory leak
</script>
Demo: https://jsbin.com/mugoguxiha/edit?html,output
Github: https://github.com/josephrocca/getVideoFrames.js
(Note that the WebCodecs API is mentioned in @Kaiido's excellent answer, but this API alone unfortunately doesn't solve the issue - the example above uses mp4box.js to handle the stuff that WebCodecs doesn't handle. Perhaps WebCodecs will eventually support the container side of things, and this answer will become mostly irrelevant, but until then I hope that this is useful.)
The situation
I need to do the following:
Get the video from a <video> and play inside a <canvas>
Record the stream from the canvas as a Blob
That's it. The first part is okay.
For the second part, I managed to record a Blob. The problem is that the Blob is empty.
The view
<video id="video" controls="true" src="http://upload.wikimedia.org/wikipedia/commons/7/79/Big_Buck_Bunny_small.ogv"></video>
<canvas id="myCanvas" width="532" height="300"></canvas>
The code
// Init
console.log(MediaRecorder.isTypeSupported('video/webm')) // true
const canvas = document.querySelector("canvas")
const ctx = canvas.getContext("2d")
const video = document.querySelector("video")

// Start the video in the player
video.play()

// On play event - draw the video in the canvas
video.addEventListener('play', () => {
  function step() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    requestAnimationFrame(step)
  }
  requestAnimationFrame(step);

  // Init stream and recorder
  const stream = canvas.captureStream()
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm',
  });

  // Get the blob data when it is available
  let allChunks = [];
  recorder.ondataavailable = function(e) {
    console.log({e}) // img1
    allChunks.push(e.data);
  }

  // Start to record
  recorder.start()

  // Stop the recorder after 5s and check the result
  setTimeout(() => {
    recorder.stop()
    const fullBlob = new Blob(allChunks, { 'type': 'video/webm' });
    const downloadUrl = window.URL.createObjectURL(fullBlob)
    console.log({fullBlob}) // img2
  }, 5000);
})
The result
This is the console.log of the ondataavailable event:
This is the console.log of the Blob:
The fiddle
Here is the JSFiddle. You can check the results in the console:
https://jsfiddle.net/1b7v2pen/
Browser behavior
This behavior (Blob data size: 0) happens on Chrome and Opera.
On Firefox it behaves slightly differently.
It records a very small video Blob (725 bytes). The video length is 5 seconds as it should be, but it's just a black screen.
The question
What is the proper way to record a stream from a canvas?
Is there something wrong in the code?
Why did the Blob come out empty?
MediaRecorder.stop() is kind of an asynchronous method.
In the stop algorithm, there is a call to requestData, which itself will queue a task to fire a dataavailable event with the currently available data since the last such event.
This means that synchronously after you've called MediaRecorder#stop(), the last data grabbed will not be part of your allChunks Array yet; it will become available shortly after (normally in the same event loop iteration).
So, when you are about to save recordings made with a MediaRecorder, be sure to always build the final Blob in the MediaRecorder's onstop event handler, which signals that the MediaRecorder has actually ended, has fired its last dataavailable event, and that everything is good.
And one thing I missed at first: you are requesting a cross-domain video. Doing so without the correct cross-origin handling will make your canvas (and MediaElement) tainted, so your MediaStream will be muted.
Since the video you are requesting is from Wikimedia, you can simply request it as a cross-origin resource; but for other resources, you'll have to be sure the server is configured to allow such requests.
const canvas = document.querySelector("canvas")
const ctx = canvas.getContext("2d")
const video = document.querySelector("video")

// Start the video in the player
video.play()

// On play event - draw the video in the canvas
video.addEventListener('play', () => {
  function step() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    requestAnimationFrame(step)
  }
  requestAnimationFrame(step);

  // Init stream and recorder
  const stream = canvas.captureStream()
  const recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm',
  });

  // Get the blob data when it is available
  let allChunks = [];
  recorder.ondataavailable = function(e) {
    allChunks.push(e.data);
  }

  // Build the final Blob only in onstop
  recorder.onstop = (e) => {
    const fullBlob = new Blob(allChunks, { 'type': 'video/webm' });
    const downloadUrl = window.URL.createObjectURL(fullBlob)
    console.log({fullBlob})
    console.log({downloadUrl})
  }

  // Start to record
  recorder.start()

  // Stop the recorder after 5s and check the result
  setTimeout(() => {
    recorder.stop()
  }, 5000);
})
<!--add the 'crossorigin' attribute to your video -->
<video id="video" controls="true" src="https://upload.wikimedia.org/wikipedia/commons/7/79/Big_Buck_Bunny_small.ogv" crossorigin="anonymous"></video>
<canvas id="myCanvas" width="532" height="300"></canvas>
Also, I can't refrain from noting that if you don't do any special drawing on your canvas, you might want to save the video source directly, or at least record the <video>'s captureStream() MediaStream directly.
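To illustrate that last suggestion, here is a minimal sketch of recording the <video>'s own stream instead of going through the canvas (it assumes a CORS-enabled source; older Firefox exposes this method as mozCaptureStream()):

const video = document.querySelector("video");
const stream = video.captureStream ? video.captureStream() : video.mozCaptureStream();
const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = (e) => {
  // build the final Blob only once the recorder has fully stopped
  const fullBlob = new Blob(chunks, { type: "video/webm" });
  console.log(URL.createObjectURL(fullBlob));
};
video.play().then(() => recorder.start());
// stop after 5s, as in the original snippet
setTimeout(() => recorder.stop(), 5000);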
I have one video of duration 9200 ms, and a canvas displaying the user's webcam video. I'm aiming to record the webcam video while the original video plays, to create an output blob of the exact same duration with MediaRecorder, but I always seem to get a video with a longer duration (typically around 9400 ms).
I've found that if I take the difference in durations and skip ahead in the output video by this amount, it will basically sync up with the original video, but I'm hoping not to have to use this hack. Knowing this, I assumed the difference was because HTML5 video's play() function is asynchronous, but even calling recorder.start() inside a .then() after the play() promise still results in an output blob with a longer duration.
I start() the MediaRecorder after play()ing the original video, and call stop() inside a requestAnimationFrame loop when I see that the original video has ended. Changing the MediaRecorder.start() to begin in the requestAnimationFrame loop only after checking that the original video is playing also results in a longer output blob.
What might be the reason for the longer output? From the documentation it doesn't appear that MediaRecorder's start or stop functions are asynchronous, so is there some way to guarantee an exact starting time with HTML5 video and MediaRecorder?
Yes, start() and stop() are async; that's why we have onstart and onstop events firing:
const stream = makeEmptyStream();
const rec = new MediaRecorder(stream);
rec.onstart = (evt) => { console.log("took %sms to start", performance.now() - begin); };
const begin = performance.now();
rec.start();

setTimeout(() => {
  rec.onstop = (evt) => { console.log("took %sms to stop", performance.now() - begin); };
  const begin = performance.now();
  rec.stop();
}, 1000);

function makeEmptyStream() {
  const canvas = document.createElement('canvas');
  canvas.getContext('2d').fillRect(0, 0, 1, 1);
  return canvas.captureStream();
}
You can thus try to pause your video once it's ready to play, then wait for your recorder to start before resuming playback of the video.
However, given that everything in both HTMLMediaElement and MediaRecorder is async, there is no way to get a perfect one-to-one relation...
const vid = document.querySelector('video');
onclick = (evt) => {
  onclick = null;
  vid.play().then(() => {
    // pause the main video immediately
    vid.pause();
    // we may have advanced by a few µs already, so go back to the beginning
    vid.currentTime = 0;
    // only when we're back at the beginning
    vid.onseeked = (evt) => {
      console.log('recording will begin shortly, please wait until the end of the video');
      console.log('original %ss', vid.duration);
      const stream = vid.captureStream ? vid.captureStream() : vid.mozCaptureStream();
      const chunks = [];
      const rec = new MediaRecorder(stream);
      rec.ondataavailable = (evt) => {
        chunks.push(evt.data);
      };
      rec.onstop = (evt) => {
        logVideoDuration(new Blob(chunks), "recorded %ss");
      };
      vid.onended = (evt) => {
        rec.stop();
      };
      // wait until the recorder is ready before playing the video again
      rec.onstart = (evt) => {
        vid.play();
      };
      rec.start();
    };
  });

  function logVideoDuration(blob, name) {
    const el = document.createElement('video');
    el.src = URL.createObjectURL(blob);
    el.play().then(() => {
      el.pause();
      // seek far past the end to force the browser to compute the real duration
      el.onseeked = (evt) => console.log(name, el.duration);
      el.currentTime = 10e25;
    });
  }
};
video { pointer-events: none; width: 100% }
click to start<br>
<video src="https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm" controls crossorigin></video>
Also note that there might be some discrepancy between the duration declared by your media, the computed duration of the recorded media, and their actual durations. Indeed, these durations are often only a value hard-coded in the metadata of the files, but given how the MediaRecorder API works it's hard to do this there, so, for instance, Chrome will produce files without a duration, and players will try to approximate that duration based on the last point they can seek to in the media.