The situation
I need to do the following:
Get the video from a <video> and play it inside a <canvas>
Record the stream from the canvas as a Blob
That's it. The first part is okay.
For the second part, I managed to record a Blob. The problem is that the Blob is empty.
The view
<video id="video" controls="true" src="http://upload.wikimedia.org/wikipedia/commons/7/79/Big_Buck_Bunny_small.ogv"></video>
<canvas id="myCanvas" width="532" height="300"></canvas>
The code
// Init
console.log(MediaRecorder.isTypeSupported('video/webm')) // true
const canvas = document.querySelector("canvas")
const ctx = canvas.getContext("2d")
const video = document.querySelector("video")
// Start the video in the player
video.play()
// On play event - draw the video in the canvas
video.addEventListener('play', () => {
function step() {
ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
requestAnimationFrame(step)
}
requestAnimationFrame(step);
// Init stream and recorder
const stream = canvas.captureStream()
const recorder = new MediaRecorder(stream, {
mimeType: 'video/webm',
});
// Get the blob data when it is available
let allChunks = [];
recorder.ondataavailable = function(e) {
console.log({e}) // img1
allChunks.push(e.data);
}
// Start to record
recorder.start()
// Stop the recorder after 5s and check the result
setTimeout(() => {
recorder.stop()
const fullBlob = new Blob(allChunks, { 'type' : 'video/webm' });
const downloadUrl = window.URL.createObjectURL(fullBlob)
console.log({fullBlob}) // img2
}, 5000);
})
The result
This is the console.log of the ondataavailable event:
This is the console.log of the Blob:
The fiddle
Here is the JSFiddle. You can check the results in the console:
https://jsfiddle.net/1b7v2pen/
Browsers behavior
This behavior (Blob data size: 0) happens on Chrome and Opera.
On Firefox it behaves slightly differently: it records a very small video Blob (725 bytes). The video length is 5 seconds as it should be, but it's just a black screen.
The question
What is the proper way to record a stream from a canvas?
Is there something wrong in the code?
Why did the Blob come out empty?
MediaRecorder.stop() is, in effect, an asynchronous method.
In the stop algorithm, there is a call to requestData, which itself will queue a task to fire a dataavailable event with the data that has become available since the last such event.
This means that synchronously after you call MediaRecorder#stop(), the last data grabbed will not be part of your allChunks array yet. It will arrive shortly after (normally in the same event loop).
So, when you are about to save recordings made with a MediaRecorder, be sure to always build the final Blob in the MediaRecorder's onstop event handler, which signals that the MediaRecorder has actually ended, has fired its last dataavailable event, and that everything is good.
One thing I missed at first: you are requesting a cross-domain video. Doing so without the correct cross-origin handling will taint your canvas (and MediaElement), so your MediaStream will be muted.
Since the video you are trying to request is hosted on Wikimedia, you can simply request it as a cross-origin resource; for other resources, you'll have to make sure the server is configured to allow such requests.
const canvas = document.querySelector("canvas")
const ctx = canvas.getContext("2d")
const video = document.querySelector("video")
// Start the video in the player
video.play()
// On play event - draw the video in the canvas
video.addEventListener('play', () => {
function step() {
ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
requestAnimationFrame(step)
}
requestAnimationFrame(step);
// Init stream and recorder
const stream = canvas.captureStream()
const recorder = new MediaRecorder(stream, {
mimeType: 'video/webm',
});
// Get the blob data when it is available
let allChunks = [];
recorder.ondataavailable = function(e) {
allChunks.push(e.data);
}
recorder.onstop = (e) => {
const fullBlob = new Blob(allChunks, { 'type' : 'video/webm' });
const downloadUrl = window.URL.createObjectURL(fullBlob)
console.log({fullBlob})
console.log({downloadUrl})
}
// Start to record
recorder.start()
// Stop the recorder after 5s and check the result
setTimeout(() => {
recorder.stop()
}, 5000);
})
<!--add the 'crossorigin' attribute to your video -->
<video id="video" controls="true" src="https://upload.wikimedia.org/wikipedia/commons/7/79/Big_Buck_Bunny_small.ogv" crossorigin="anonymous"></video>
<canvas id="myCanvas" width="532" height="300"></canvas>
Also, I can't refrain from noting that if you don't do any special drawing on your canvas, you might want to save the video source directly, or at least record the <video>'s own captureStream() MediaStream directly.
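A minimal sketch of that direct approach, assuming the same crossorigin <video> element and 5-second window as above (note that Firefox exposes the method as mozCaptureStream()):
// record the <video> element's own stream, no canvas involved
const video = document.querySelector("video")
const directStream = video.captureStream ? video.captureStream() : video.mozCaptureStream()
const directRecorder = new MediaRecorder(directStream, { mimeType: 'video/webm' })
const chunks = []
directRecorder.ondataavailable = (e) => chunks.push(e.data)
directRecorder.onstop = () => {
  const fullBlob = new Blob(chunks, { type: 'video/webm' })
  console.log(URL.createObjectURL(fullBlob))
}
video.play().then(() => {
  directRecorder.start()
  // stop after 5s, like in the canvas version
  setTimeout(() => directRecorder.stop(), 5000)
})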
Related
The following JavaScript code records sound and generates a Blob of audio every 0.5 seconds.
After recording has stopped, the program plays the first blob, data[0].
I need the audio player to fire an event after data[0] has played, so that an event handler can deliver the next portion, data[1] (then data[2], data[3], etc.), to the audio player.
How can I modify the code, and which objects should I use to do this?
I know that I could pass the whole data[] array to the audio player, but I need a mechanism that allows the audio player to request the next portions using events.
navigator.mediaDevices.getUserMedia({audio:true})
.then(function onSuccess(stream) {
const recorder = new MediaRecorder(stream);
const data = [];
recorder.ondataavailable = (e) => {
data.push(e.data);
};
recorder.start(500); // will fire the 'dataavailable' event every 0.5 seconds
recorder.onstop = (e) => {
const audio = document.createElement('audio');
audio.src = window.URL.createObjectURL(new Blob( data[0] ));
}
setTimeout(() => {
rec.stop();
}, 5000);
})
.catch(function onError(error) {
console.log(error.message);
});
I guess that's what you're looking for?
navigator.mediaDevices
.getUserMedia({ audio: true })
.then(function onSuccess(stream) {
// create the audio stream
const audio = document.createElement('audio');
audio.srcObject = stream; // Pass the audio stream
audio.controls = true;
audio.play();
document.body.appendChild(audio);
const recorder = new MediaRecorder(stream);
const data = [];
// Set event listener
// ondataavailable fires when you call stop() or requestData(), or after each timeslice you pass to start()
recorder.ondataavailable = e => data.push(e.data);
// Start recording
// Will generate blob every 500ms
recorder.start(500);
})
.catch(function onError(error) {
console.log(error.message);
});
You had some mistakes to correct:
When you call start() with a timeslice parameter, ondataavailable fires after each timeslice, but you still need to stop the recorder to get the final event and build the complete blob.
You made a mistake with the recorder variable's name (rec.stop() instead of recorder.stop()) and with the timing in the setTimeout call.
You recreate an audio player every time the recorder stops, and never append it to the DOM.
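As for the event-driven sequential playback you asked about: keep in mind that MediaRecorder chunks after the first are generally not standalone playable files (they lack the container header), so the sketch below assumes an array of independently playable audio Blobs; with raw recorder chunks you would have to build each Blob cumulatively from data[0] up to data[i]. The 'ended' event is the mechanism that lets the player request the next portion:
function playSequentially(blobs) {
  const audio = document.createElement('audio');
  audio.controls = true;
  document.body.appendChild(audio);
  let i = 0;
  const playNext = () => {
    if (i >= blobs.length) return;
    if (audio.src) URL.revokeObjectURL(audio.src); // free the previous object URL
    audio.src = URL.createObjectURL(blobs[i++]);
    audio.play();
  };
  audio.onended = playNext; // the player "requests" the next portion here
  playNext();
}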
It is known that iOS Safari does not support canvas.captureStream() for, e.g., piping a canvas's content into a video element; see this demo, which does not work in iOS Safari.
However, canvas.captureStream() is a valid function in iOS Safari, and it correctly returns a CanvasCaptureMediaStreamTrack; it just doesn't function as intended. To detect browsers that don't support canvas.captureStream, it would have been easy to test typeof canvas.captureStream === 'function', but at least for iOS Safari we can't rely on that. Neither can we rely on the type of the returned value.
How do I write JavaScript that detects whether the current browser effectively supports canvas.captureStream()?
No iOS here to test on, but according to the comments on the issue you linked to, captureStream() actually works; what doesn't work is the HTMLVideoElement's reading of this MediaStream. So that's what you actually want to test.
According to the messages there, the video doesn't even fail to load (i.e., the metadata are correctly set, and I don't expect events like error to fire; though if one did fire, it would be quite simple to detect). So first, check whether a video is able to play such a MediaStream:
function testReadingOfCanvasCapturedStream() {
// first check the DOM API is available
if( !testSupportOfCanvasCaptureStream() ) {
return Promise.resolve(false);
}
// create a test canvas
const canvas = document.createElement("canvas");
// we need to init a context on the canvas
const ctx = canvas.getContext("2d");
const stream = canvas.captureStream();
const vid = document.createElement("video");
vid.muted = true;
vid.playsInline = true;
vid.srcObject = stream;
let supports = false;
// Safari needs us to draw on the canvas
// asynchronously, after we've requested the MediaStream
setTimeout(() => ctx.fillRect(0,0,5,5));
// if loading failed, a rejection from .play() would be enough to tell,
// but according to the comments on the issue, it doesn't reject
return vid.play()
.then(() => supports = true)
.catch(() => supports = false)
.then(() => {
// clean up the test track
stream.getTracks().forEach(track => track.stop());
return supports;
});
}
function testSupportOfCanvasCaptureStream() {
return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}
testReadingOfCanvasCapturedStream()
.then(supports => console.log(supports));
But if the video is able to play yet no image gets painted, then we have to go a bit deeper and check what has actually been painted on the video. To do this, we'll draw a known color on the canvas, wait for the video to have loaded, then draw the video back on the canvas before checking the color of that frame:
async function testReadingOfCanvasCapturedStream() {
// first check the DOM API is available
if( !testSupportOfCanvasCaptureStream() ) {
return false;
}
// create a test canvas
const canvas = document.createElement("canvas");
// we need to init a context on the canvas
const ctx = canvas.getContext("2d");
const stream = canvas.captureStream();
const clean = () => stream.getTracks().forEach(track => track.stop());
const vid = document.createElement("video");
vid.muted = true;
vid.srcObject = stream;
// Safari needs us to draw on the canvas
// asynchronously, after we've requested the MediaStream
setTimeout(() => {
// we draw in a well-known color
ctx.fillStyle = "#FF0000";
ctx.fillRect(0,0,300,150);
});
try {
await vid.play();
}
catch(e) {
// failed to load, no need to go deeper
// it's not supported
clean();
return false;
}
// at this point our canvas should have been painted on the video
// pause to keep this frame displayed on the video
vid.pause();
// now draw it back on the canvas
ctx.clearRect(0,0,300,150);
ctx.drawImage(vid,0,0);
const pixel_data = ctx.getImageData(5,5,1,1).data;
const red_channel = pixel_data[0];
clean();
return red_channel > 0; // it has red
}
function testSupportOfCanvasCaptureStream() {
return "function" === typeof HTMLCanvasElement.prototype.captureStream;
}
testReadingOfCanvasCapturedStream()
.then(supports => console.log(supports));
I have one video of 9200 ms duration, and a canvas displaying the user's webcam video. I'm aiming to record the webcam video while the original video plays, so as to produce an output blob of exactly the same duration with MediaRecorder, but I always seem to get a longer video (typically around 9400 ms).
I've found that if I take the difference in durations and skip ahead in the output video by that amount, it basically syncs up with the original video, but I'm hoping not to have to use this hack. Knowing this, I assumed the difference arose because HTML5 video's play() function is asynchronous, but even calling recorder.start() inside a .then() after the play() promise still results in an output blob with a longer duration.
I start() the MediaRecorder after play()ing the original video, and call stop() inside a requestAnimationFrame loop when I see that the original video has ended. Moving MediaRecorder.start() into the requestAnimationFrame loop, so that it only runs once the original video is confirmed to be playing, also results in a longer output blob.
What might be the reason for the longer output? From the documentation it doesn't appear that MediaRecorder's start and stop functions are asynchronous, so is there some way to guarantee an exact starting time with HTML5 video and MediaRecorder?
Yes, start() and stop() are async; that's why we have the onstart and onstop events:
const stream = makeEmptyStream();
const rec = new MediaRecorder(stream);
rec.onstart = (evt) => { console.log( "took %sms to start", performance.now() - begin ); };
const begin = performance.now();
rec.start();
setTimeout( () => {
rec.onstop = (evt) => { console.log( "took %sms to stop", performance.now() - begin ); };
const begin = performance.now();
rec.stop();
}, 1000 );
function makeEmptyStream() {
const canvas = document.createElement('canvas');
canvas.getContext('2d').fillRect(0,0,1,1);
return canvas.captureStream();
}
You can thus try to pause your video once it's ready to play, then wait until your recorder has started before resuming the playback of the video.
However, given that everything in both HTMLMediaElement and MediaRecorder is async, there is no way to get a perfect 1:1 relation...
const vid = document.querySelector('video');
onclick = (evt) => {
onclick = null;
vid.play().then( () => {
// pause immediately the main video
vid.pause();
// we may have advanced by a few µs already, so go back to the beginning
vid.currentTime = 0;
// only when we're back to beginning
vid.onseeked = (evt) => {
console.log( 'recording will begin shortly, please wait until the end of the video' );
console.log( 'original %ss', vid.duration );
const stream = vid.captureStream ? vid.captureStream() : vid.mozCaptureStream();
const chunks = [];
const rec = new MediaRecorder( stream );
rec.ondataavailable = (evt) => {
chunks.push( evt.data );
};
rec.onstop = (evt) => {
logVideoDuration( new Blob( chunks ), "recorded %ss" );
};
vid.onended = (evt) => {
rec.stop();
};
// wait until the recorder is ready before playing the video again
rec.onstart = (evt) => {
vid.play();
};
rec.start();
};
} );
function logVideoDuration( blob, name ) {
const el = document.createElement('video');
el.src = URL.createObjectURL( blob );
el.play().then( () => {
el.pause();
el.onseeked = (evt) => console.log( name, el.duration );
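// seeking far past the end forces the browser to compute the real duration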
el.currentTime = 10e25;
} );
}
};
video { pointer-events: none; width: 100% }
click to start<br>
<video src="https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm" controls crossorigin></video>
Also note that there may be some discrepancy between the duration declared by your media, the computed duration of the recorded media, and their actual durations. Indeed, these durations are often just a value hard-coded in the metadata of the files, but given how the MediaRecorder API works, it's hard to set that value there. Chrome, for instance, will produce files without a duration, and players will have to approximate it from the last point they can seek to in the media.
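The logVideoDuration helper above exploits exactly that workaround: seeking far past the end forces the browser to compute the real duration. Repackaged as a standalone, promise-based helper (a sketch of the same trick, nothing more):
// get the real duration of a Blob whose metadata lack one
// (e.g. Chrome's MediaRecorder output)
function getBlobDuration(blob) {
  return new Promise((resolve, reject) => {
    const el = document.createElement('video');
    el.src = URL.createObjectURL(blob);
    el.onerror = () => reject(el.error);
    el.onloadedmetadata = () => {
      // duration is Infinity (or missing) at this point;
      // seeking far past the end makes the browser recompute it
      el.currentTime = Number.MAX_SAFE_INTEGER;
    };
    el.onseeked = () => {
      URL.revokeObjectURL(el.src);
      resolve(el.duration);
    };
  });
}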