I've been trying to record a video of an HTML Canvas animation using the CCapture.js project. Specifically, it has a webm capture option which uses Whammy.js under the hood. I've tinkered with many of the parameters, but so far it only outputs a video of the correct length with none of the canvas content: it just looks black.
Here is a jsfiddle illustrating the issue. It creates an animation and records 2 seconds, then saves/downloads the webm output.
Here is the CCapture setup I've got in the jsfiddle:
var capturer = new CCapture({
  display: true,
  framerate: 30,
  format: 'webm',
  verbose: true
});

// start capture
capturer.start();

// end capture and save
setTimeout(function() {
  capturer.stop();
  capturer.save();
}, 2000);
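For context, CCapture only encodes frames it is explicitly handed, so the render loop also calls capturer.capture(canvas) on every frame, roughly like this (canvas and render stand in for the fiddle's own names):

function animate() {
  requestAnimationFrame(animate);
  render();                 // draw the next frame of the animation onto the canvas
  capturer.capture(canvas); // hand the finished frame to CCapture
}
animate();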
It turns out it was a bug in Whammy.js: an issue with parsing WebP. After applying the fix, it records correctly again!
Related
Today I upgraded to macOS Big Sur 11.0.1 and Safari 14, and my website (one-to-one video chat based on WebRTC) stopped working on Safari. After 10 seconds of a video call, the following console error appears: "A MediaStreamTrack ended due to a capture failure" and the other person can no longer see the video.
My code looks like this:
const userMedia = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});

if (userMedia != null) {
  userMedia.getTracks().forEach((track) => {
    otherRtcPeer.addTrack(track, userMedia);
  });
}
Is it a Safari bug or an implementation issue? And how can I solve it?
After going through this guide, I made the following changes and resolved the issue:
Keep the stream object in React state.
When the video element renders/re-renders, clone the stream object and assign the clone to the video element's srcObject.
After capturing the picture, stop all media tracks in the stream.
This way the error mentioned above is avoided.
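A minimal sketch of those three steps, assuming a React function component (all names here are illustrative; the guide's actual code is not shown):

import { useEffect, useRef, useState } from 'react';

function CameraCapture() {
  const videoRef = useRef(null);
  const [stream, setStream] = useState(null); // 1. stream lives in React state

  const startCamera = async () => {
    const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    setStream(media);
  };

  // 2. on each render/re-render, assign a clone of the stream to the video element
  useEffect(() => {
    if (stream && videoRef.current) {
      videoRef.current.srcObject = stream.clone();
    }
  });

  // 3. after capturing the picture, stop all media tracks in the stream
  const stopCamera = () => {
    if (stream) stream.getTracks().forEach((track) => track.stop());
  };

  return (
    <div>
      <video ref={videoRef} autoPlay playsInline />
      <button onClick={startCamera}>Start</button>
      <button onClick={stopCamera}>Done</button>
    </div>
  );
}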
I was able to fix the issue on my end by styling a video element I'm using as a WebGL texture with display: block; opacity: 0 (instead of display: none).
Perhaps they removed the ability to play off-screen video textures on iOS 14/Big Sur.
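In script terms the workaround looks something like this (videoEl stands in for the video element used as the WebGL texture source):

// render the element but keep it invisible, instead of display: none
videoEl.style.display = 'block';
videoEl.style.opacity = '0';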
I'm trying to create a video conference web app.
The problem: when I disable my camera in the middle of a conference, it works in that my web page shows a blank video, but my laptop's camera indicator light stays on. Is that normal, or did I miss something?
Here is what I tried:
videoAction() {
  navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  }).then(stream => {
    this.myStream = stream
  })
  this.myStream.getVideoTracks()[0].enabled = !(this.myStream.getVideoTracks()[0].enabled)
  this.mediaStatus.video = this.myStream.getVideoTracks()[0].enabled
}
There is also a stop() method, which should do the trick in Chrome and Safari. Firefox should already mark the camera as unused once the enabled property is set to false.
this.myStream.getVideoTracks()[0].stop();
Firstly, MediaStreamTrack.enabled is a Boolean, so you can simply assign the value false.
To simplify your code, you might call:
var vidTrack = myStream.getVideoTracks();
vidTrack.forEach(track => track.enabled = false);
When MediaStreamTrack.enabled = false, the track passes empty frames to the stream, which is why black video is sent. The camera/source itself is not stopped; I believe the webcam light will turn off on Mac devices, but perhaps not on Windows, etc.
.stop(), on the other hand, completely stops the track and tells the source it is no longer needed. If the source is only connected to this track, the source itself will stop completely. Calling .stop() will definitely turn off the camera and the webcam light, but you won't be able to turn the track back on in your stream instance (since it was destroyed). Therefore, completely turning off the camera is probably not what you want; stick to .enabled = false to temporarily disable video and .enabled = true to turn it back on.
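A minimal sketch of that pattern, using myStream from the question (toggleVideo is an illustrative helper name):

function toggleVideo(stream, on) {
  // enabled = false sends empty (black) frames but keeps the camera/source alive,
  // so video can be re-enabled later without a new getUserMedia call
  stream.getVideoTracks().forEach((track) => {
    track.enabled = on;
  });
}

toggleVideo(myStream, false); // temporarily disable video
toggleVideo(myStream, true);  // turn it back on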
We need to assign window.localStream = stream; inside the navigator.mediaDevices.getUserMedia callback.
Then, to stop the webcam and turn the LED light off:
localStream.getVideoTracks()[0].stop();
video.src = '';
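Put together, and assuming the preview was attached via srcObject, the whole flow looks roughly like this:

navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then((stream) => {
  window.localStream = stream; // keep a reference so the tracks can be stopped later
  video.srcObject = stream;    // attach the preview
});

// later: stop the webcam, which also turns the LED off
localStream.getVideoTracks()[0].stop();
video.srcObject = null;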
takePicture = async function() {
  if (this.camera) {
    const options = { quality: 0.5, base64: true, pauseAfterCapture: true };
    const data = await this.camera.takePictureAsync(options);
    this.setState({ path: data.uri });
  }
}
takePicture is my function to capture the image. When I don't use pauseAfterCapture in the options, it takes 3 seconds for the image to be captured, with the camera still active during those 3 seconds. When I use pauseAfterCapture, it takes around 1.5 seconds to capture the image, with the camera active during those 1.5 seconds.
I've also used skipProcessing, which helps with fast capture, but I don't want to lose other information like base64, width, quality, mirrorImage, exif, etc., as mentioned on the react-native-camera GitHub page.
Does this have something to do with takePictureAsync taking time to resolve? If so, how do I manage it?
Also, if this question has no solution, how can I show an ActivityIndicator while the image is being captured?
P.S. I know this question has been asked a lot, but I'm unable to find any solution. I hope we can come up with one that helps other people in the future.
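For the ActivityIndicator part, one common pattern (a sketch, not taken from react-native-camera's documentation) is to keep a capturing flag in state and render the indicator while it is true:

import { ActivityIndicator } from 'react-native';

takePicture = async function() {
  if (this.camera) {
    this.setState({ capturing: true }); // show the spinner
    const options = { quality: 0.5, base64: true, pauseAfterCapture: true };
    const data = await this.camera.takePictureAsync(options);
    this.setState({ path: data.uri, capturing: false }); // hide it again
  }
}

// in render():
// {this.state.capturing && <ActivityIndicator size="large" />}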
I'm creating a PDF output tool using jsPDF but need to add multiple pages, each holding a canvas image of a video frame.
I am stuck on the logic, as I can't work out how best to queue the operations and wait on events to achieve the right result.
To start I have a video loaded into a video tag and can get or set its seek point simply with:
video.currentTime
I also have an array of video seconds like the following:
var vidSecs = [1,9,13,25,63];
What I need to do is loop through this array, seek in the video to the seconds defined in the array, create a canvas at these seconds and then add each canvas to a PDF page.
I have a create canvas from video frame function as follows:
function capture_frame(video_ctrl, width, height){
  if (width == null) {
    width = video_ctrl.videoWidth;
  }
  if (height == null) {
    height = video_ctrl.videoHeight;
  }
  var canvas = document.createElement('canvas'); // 'var' added to avoid leaking a global
  canvas.width = width;
  canvas.height = height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(video_ctrl, 0, 0, width, height);
  return canvas;
}
This function works fine in conjunction with the following to add an image to the PDF:
function addPdfImage(pdfObj, videoObj){
  pdfObj.addPage();
  pdfObj.text("Image at time point X:", 10, 20);
  var vidImage = capture_frame(videoObj, null, null);
  var dataURLWidth = 0;
  var dataURLHeight = 0;
  if (videoObj.videoWidth > pdfObj.internal.pageSize.width) {
    dataURLWidth = pdfObj.internal.pageSize.width;
    dataURLHeight = (pdfObj.internal.pageSize.width / videoObj.videoWidth) * videoObj.videoHeight;
  } else {
    dataURLWidth = videoObj.videoWidth;
    dataURLHeight = videoObj.videoHeight;
  }
  // note: the correct MIME type is 'image/jpeg'; 'image/jpg' silently falls back to PNG
  pdfObj.addImage(vidImage.toDataURL('image/jpeg'), 'JPEG', 10, 50, dataURLWidth, dataURLHeight);
}
My confusion is how best to call these bits of code while looping through the vidSecs array: setting video.currentTime requires waiting for the video's onseeked event to fire before the frame-capture code can run and add the image to the PDF.
I've tried the following, but I only get the last image, as the loop completes before the onseeked event fires and calls the frame capture code.
for (var i = 0; i < vidSecs.length; i++) {
  video.currentTime = vidSecs[i];
  video.onseeked = function() {
    addPdfImage(jsPDF_Variable, video);
  };
}
Any thoughts much appreciated.
This is not a real answer but a comment, since I am developing a similar application and have no solution.
I am trying to extract video frames from a live webcam video stream and save them to a canvas/context, updated every 1-5 seconds.
How to loop HTML5 webcam video + snap photo with delay and photo refresh?
I have created 2 canvases populated by a setTimeout(5000) event, and on a test run I don't get a 5-second delay between the canvases/contexts; sometimes two contexts, each delayed by 5 seconds, get populated with an image at the same time.
So I am trying to implement
Draw HTML5 Video onto Canvas - Google Chrome Crash, Aw Snap
var toggle = true;

function loop() {
  toggle = !toggle;
  if (toggle) {
    if (!v.paused) requestAnimationFrame(loop);
    return;
  }

  /// draw video frame every 1/30 frame
  ctx.drawImage(v, 0, 0);

  /// loop if video is playing
  if (!v.paused) requestAnimationFrame(loop);
}
to replace setInterval/setTimeout and get the video and video frames properly synced:
"Use requestAnimationFrame (rAF) instead. The 20ms makes no sense. Most video runs at 30 FPS in the US (NTSC system) and at 25 FPS in Europe (PAL system). That would be 33.3ms and 40ms respectively."
I am afraid HTML5 provides no solid support for synced real-time live video processing via canvas/context, since HTML5 offers no precise timing; it was intended to be event-driven, not to run as real-time application code (C, C++, ...).
My 100+ search engine queries did not turn up a single example of the HTML5 app I intend to develop.
What worked for me was snapping a photo from the webcam video input, controlled by a button click event.
If I am wrong, please correct me.
Two approaches:
Create a new video element for every seek event (code provided by Chris West).
Reuse the video element via async/await (code provided by Adrian Wong); a sketch of this approach follows.
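A minimal sketch of the async/await approach, reusing the vidSecs array, video element, and addPdfImage function from the question (seekTo and buildPdf are illustrative names, not from the linked answers):

// resolve once the video has finished seeking to the requested time
function seekTo(videoEl, seconds) {
  return new Promise(function(resolve) {
    videoEl.onseeked = resolve;
    videoEl.currentTime = seconds;
  });
}

async function buildPdf() {
  for (const sec of vidSecs) {
    await seekTo(video, sec);           // wait for the frame to be ready
    addPdfImage(jsPDF_Variable, video); // then capture it onto a new page
  }
  jsPDF_Variable.save('video_frames.pdf'); // filename is illustrative
}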
I've built an HTML5 video player whose frames I load into a canvas to manipulate and then draw onto a second canvas to display. The video starts out quite slow, and the frame rate only gets worse each time it is played. All I am currently manipulating is the color value when the video is paused, but I will eventually be doing real-time manipulation throughout videos that will be posted in the future.
I used the tutorial below to learn this trick: https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may be interference coming from elsewhere, so feel free to check the source code at the link at the bottom.
var v = document.getElementById('video');
var color = "#DA7AC1";

var processes = {
  timerCallback: function() {
    if (this.v2.paused || this.v2.ended) {
      return;
    }
    this.ctxIn.drawImage(this.v2, 0, 0, this.width, this.height);
    this.pixelScan();
    var self = this;
    setTimeout(function() {
      self.timerCallback();
    }, 0);
  },
  doLoad: function() {
    this.v2 = document.getElementById("video");
    this.cIn = document.getElementById("cIn");
    this.ctxIn = this.cIn.getContext("2d");
    this.cOut = document.getElementById("cOut");
    this.ctxOut = this.cOut.getContext("2d");
    var self = this;
    this.v2.addEventListener("playing", function() {
      self.width = self.v2.videoWidth;
      self.height = self.v2.videoHeight;
      self.cIn.width = self.v2.videoWidth;
      self.cIn.height = self.v2.videoHeight;
      self.cOut.width = self.v2.videoWidth;
      self.cOut.height = self.v2.videoHeight;
      self.timerCallback();
    }, false);
  },
  pixelScan: function() {
    var frame = this.ctxIn.getImageData(0, 0, this.width, this.height);
    for (var i = 0; i < frame.data.length; i += 4) {
      var grayscale = frame.data[i] * 0.3 + frame.data[i + 1] * 0.59 + frame.data[i + 2] * 0.11;
      frame.data[i] = grayscale;
      frame.data[i + 1] = grayscale;
      frame.data[i + 2] = grayscale;
    }
    this.ctxOut.putImageData(frame, 0, 0);
    return;
  }
}
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try to adjust your timer, avoiding 0 as the timeout value:

setTimeout(function() {
  self.timerCallback();
}, 34);

34ms is plenty, as video frame rate is typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30. If you use 0 you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34ms you end up processing the same frame twice or more, which of course is unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to keep 34ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JS to process in real time on a typical consumer computer. Try to reduce the video size so a normally spec'ed computer can process the data.
Reason 3 (in part)
You don't need two on-screen canvases or even an on-screen video. Try to create these tags dynamically without inserting them into the DOM. Use a single canvas on-screen and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a toggle to reduce the FPS to, for example, 30, since you don't need to process each frame twice (ref. 30 FPS video frame rate).
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
    video = document.createElement('video'),
    ctx = canvas.getContext('2d');

video.preload = 'auto';
video.addEventListener('canplay', start, false);

if (video.canPlayType('video/mp4')) {
  video.src = 'videoUrl.mp4';
} else if ...etc.
Then when the video has loaded enough data (on metadata or canplay) you set the off-screen (and on-screen) canvas element to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while the video is playing, process its frames and copy the result to the on-screen canvas you defined before.
You don't have to have an off-screen canvas; I merely mention this because your original code used an in and an out canvas, IIRC. You can simply use a single on-screen canvas and the off-screen video: draw the video frame to the canvas, process it, and put the processed data back. That should work fine in this case too.
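Putting reasons 1-4 together, a rough sketch using the canvas, ctx, and video created above (the grayscale loop stands in for your own processing):

function start() {
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  document.body.appendChild(canvas); // the single on-screen canvas
  video.play();
  requestAnimationFrame(loop);
}

function loop() {
  ctx.drawImage(video, 0, 0); // grab the current video frame
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (var i = 0; i < frame.data.length; i += 4) {
    var gray = frame.data[i] * 0.3 + frame.data[i + 1] * 0.59 + frame.data[i + 2] * 0.11;
    frame.data[i] = frame.data[i + 1] = frame.data[i + 2] = gray;
  }
  ctx.putImageData(frame, 0, 0); // show the processed frame
  if (!video.paused && !video.ended) requestAnimationFrame(loop);
}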
I ran a profile in Chrome and it points to line 46 as taking up the most CPU:
setTimeout(function() {
  self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issue and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and was using HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So, if you have this slow render issue, it is probably an issue with the video encoding. The easiest fix is to get a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a huge learning curve and poor public documentation.