How to completely turn off camera on mediastream javascript

I'm trying to create a video conference web app.
The problem: when I disable my camera in the middle of a conference, it works in the sense that my page shows a blank video, but my laptop's camera indicator light stays on. Is that normal, or am I missing something?
Here is what I tried:
videoAction() {
  navigator.mediaDevices.getUserMedia({
    video: true,
    audio: true
  }).then(stream => {
    this.myStream = stream
  })
  this.myStream.getVideoTracks()[0].enabled = !(this.myStream.getVideoTracks()[0].enabled)
  this.mediaStatus.video = this.myStream.getVideoTracks()[0].enabled
}

There is also a stop() method, which should do the trick in Chrome and Safari. Firefox should already mark the camera as unused when you set the enabled property.
this.myStream.getVideoTracks()[0].stop();

Firstly, MediaStreamTrack.enabled is a Boolean, so you can simply assign the value false.
To simplify your code, you might call:
var vidTrack = myStream.getVideoTracks();
vidTrack.forEach(track => track.enabled = false);
When MediaStreamTrack.enabled = false, the track passes empty frames to the stream, which is why black video is sent. The camera/source itself is not stopped; I believe the webcam light will turn off on Mac devices, but perhaps not on Windows etc.
.stop(), on the other hand, completely stops the track and tells the source it is no longer needed. If the source is only connected to this track, the source itself will completely stop. Calling .stop() will definitely turn off the camera and webcam light, but you won't be able to turn it back on in your stream instance (since its video track was destroyed). Therefore, completely turning off the camera is not what you want to do; just stick to .enabled = false to temporarily disable video and .enabled = true to turn it back on.
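To make that concrete, here is a minimal sketch of such a toggle, assuming a myStream variable that holds the stream returned by getUserMedia (as in the question):

// Minimal sketch, assuming `myStream` holds the stream returned by getUserMedia.
function setCameraEnabled(on) {
  myStream.getVideoTracks().forEach(track => {
    track.enabled = on; // false sends black frames; the device itself stays open
  });
}

setCameraEnabled(false); // temporarily mute video
setCameraEnabled(true);  // resume sending real frames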

We need to assign window.localStream = stream; inside the navigator.mediaDevices.getUserMedia callback.
Then, to stop the webcam and turn the camera LED off:
localStream.getVideoTracks()[0].stop();
video.src = '';
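Since stop() destroys the track, turning the camera back on later means requesting it again. A rough sketch of both directions, assuming window.localStream and a video element as in the snippet above (srcObject is used here as the modern counterpart of clearing video.src):

// Rough sketch: fully release the camera, then re-acquire it later.
function stopCamera() {
  window.localStream.getTracks().forEach(track => track.stop()); // LED turns off
  video.srcObject = null;
}

async function restartCamera() {
  // stop() destroyed the old tracks, so a fresh getUserMedia call is required.
  window.localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  video.srcObject = window.localStream;
}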

Related

Safari - A MediaStreamTrack ended due to a capture failure

Today I upgraded to macOS Big Sur 11.0.1 and Safari 14, and my website (one-to-one video chat based on WebRTC) stopped working on Safari. After 10 seconds of a video call, the following console error appears: "A MediaStreamTrack ended due to a capture failure" and the other person can no longer see the video.
My code looks like this:
const userMedia = await navigator.mediaDevices.getUserMedia({
  video: true,
  audio: true,
});

if (userMedia != null) {
  userMedia.getTracks().forEach((track) => {
    otherRtcPeer.addTrack(track, userMedia);
  });
}
Is it a Safari bug or an implementation issue? And how to solve it?
After going through this guide, I made the following changes and resolved the issue:
1. Keep the stream object in React state.
2. When the video element renders or re-renders, clone the stream object and assign the clone to the video element's srcObject.
3. After capturing the picture, stop all media tracks in the stream.
This way the error mentioned above was overcome.
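A rough sketch of that approach in a React component (all names here are illustrative, not taken from the original code):

import React, { useEffect, useRef, useState } from 'react';

function Capture() {
  const videoRef = useRef(null);
  const [stream, setStream] = useState(null);

  useEffect(() => {
    navigator.mediaDevices.getUserMedia({ video: true }).then(setStream);
  }, []);

  useEffect(() => {
    if (videoRef.current && stream) {
      // Clone the stream whenever the video element (re-)renders.
      videoRef.current.srcObject = stream.clone();
    }
  }, [stream]);

  const capture = () => {
    // ...grab the current frame, then stop all tracks to release the camera.
    stream.getTracks().forEach((track) => track.stop());
  };

  return <video ref={videoRef} autoPlay playsInline onClick={capture} />;
}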
I was able to fix the issue on my end by styling the video element I'm using as a WebGL texture with display: block; opacity: 0 (instead of display: none).
Perhaps they removed the ability to play off-screen video textures on iOS 14 / Big Sur.
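In code, that workaround amounts to something like this (the element id is a placeholder):

// Hypothetical element id; the point is to keep the video renderable but invisible,
// instead of display: none, which seems to halt playback on iOS 14 / Big Sur.
const vid = document.getElementById('webglVideoSource');
vid.style.display = 'block';
vid.style.opacity = '0';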

Web Audio API: Why can you only start sources once?

Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam)
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio nodes graph, create a brand new object, and connect() it into the graph, (and then discard the object later) when it would be simpler to just turn it on and off when needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is because AudioContext requires a user action to run in Snippet.)
play = () => {
  d.addEventListener('mouseover', () => src.connect(ctx.destination));
  d.addEventListener('mouseout', () => src.disconnect(ctx.destination));
  ctx = new AudioContext();
  src = ctx.createOscillator();
  src.frequency.value = 261.63; // play middle C
  src.start();
}
div {
  height: 32px;
  width: 32px;
  background-color: red;
}

div:hover {
  background-color: green;
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the web audio api works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes. The web audio api is intended to be used this way.
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
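For example, such a play() abstraction could look roughly like this (a sketch, not code from this answer or from the linked library):

// Sketch of a helper that creates a fresh oscillator per note, since source
// nodes are single-use.
const ctx = new AudioContext();

function playNote(frequency, duration = 0.5) {
  const osc = ctx.createOscillator();
  osc.frequency.value = frequency;
  osc.connect(ctx.destination);
  osc.onended = () => osc.disconnect(); // tidy up once the note finishes
  osc.start();
  osc.stop(ctx.currentTime + duration);
}

playNote(261.63); // middle C
playNote(329.63); // E; overlapping notes are fine, each call gets its own node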
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead. You can do that with setTimeout. For the timeout, take the longer of the two release phases from the amp envelope for this set of oscillators, subtract AudioContext.currentTime, and multiply by 1000 (setTimeout works in milliseconds, Web Audio works in seconds).
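A rough sketch of that timing logic, with placeholder names (audioCtx, ampGain) that are not from the answer:

// Sketch: instead of stopping the oscillators, disconnect them once the
// longest release phase of the amp envelope has played out.
function releaseVoice(oscillators, ampGain, releaseEndTime) {
  // releaseEndTime is in AudioContext time (seconds); setTimeout expects milliseconds.
  const delayMs = (releaseEndTime - audioCtx.currentTime) * 1000;
  setTimeout(() => {
    oscillators.forEach(osc => osc.disconnect(ampGain)); // detach instead of stop()
  }, delayMs);
}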

VideoTexture is not playing in VR

At first the texture works fine and the video plays as expected, but when VR is entered via VRDisplay.requestPresent it stops. Why is this, and how can I fix it?
The VR display has its own render loop. Usually needsUpdate is automatically set to true on every animation frame by three.js, but this is only true for the default display.
To fix this, get the VR display from the vrdisplayconnect event and create your own update loop. E.g.
// Inside the 'vrdisplayconnect' event handler:
let display = e.display;

let displayUpdateLoop = () => {
  // May get a warning if getFrameData is not called.
  let frameData = new VRFrameData();
  display.getFrameData(frameData);

  videoTexture.needsUpdate = true;

  // Stop the loop if no longer presenting.
  if (display.isPresenting) {
    display.requestAnimationFrame(displayUpdateLoop);
  }
};

display.requestAnimationFrame(displayUpdateLoop);

oddity about createMediaElementSource()

I was building an audio program and hit a stumbling block on the .createMediaElementSource method. I was able to solve the problem, but I do not quite know why the solution works.
In my HTML, I created an audio player: <audio id="myAudio"><source src="music.mp3"></audio>
Now in my JS:
context = new AudioContext();
audio = document.getElementById('myAudio');
source = context.createMediaElementSource(audio);
audio.play();
doesn't work. The audio element loads, but doesn't play the song, nor is there audio.
However! This JS code works:
context = ...; //same as above
audio...;
source = context.createMediaElementSource(audio[0]);
audio.play();
All I changed was adding a [0] to the audio and the program suddenly works again. Since .getElementById doesn't return an array, I don't know why referring to audio as an array works, but just referring to audio does not.
A few months late, but in case others stumble upon this and want an answer:
This behaviour is described in the Web Audio API spec:
The createMediaElementSource method
Creates a MediaElementAudioSourceNode given an HTMLMediaElement. As a consequence of calling this method, audio playback from the HTMLMediaElement will be re-routed into the processing graph of the AudioContext.
Emphasis mine. Since the output from the audio element is now routed into the newly created MediaElementAudioSourceNode instance (instead of the original destination, usually your speakers), you need to route the output of the instance back to the original destination:
var audio = document.getElementById('myAudio');
var ctx = new AudioContext();
var src = ctx.createMediaElementSource(audio);
src.connect(ctx.destination); // connect the output of the source to your speakers
audio.play();
The reason it worked when you added [0] is that document.getElementById doesn't return an array, nor an element with a defined key of "0", so audio[0] is undefined. As such, you might as well have written ctx.createMediaElementSource(undefined), which doesn't re-route the audio from the #myAudio element.

HTML5 video to canvas playing very slow

I've built an HTML5 video player that I load into a canvas so I can manipulate the frames, then draw the result onto a second canvas to display it. The video starts out quite slow, and the frame rate gets worse each time it is played. All I currently manipulate is the color values while the video is paused, but eventually I will be doing real-time manipulation throughout videos that will be posted in the future.
I used the below tutorial to learn this trick https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may possibly be interference coming from elsewhere so feel free to check the source code at the link at the bottom
var v = document.getElementById('video');
var color = "#DA7AC1";

var processes = {
  timerCallback: function() {
    if (this.v2.paused || this.v2.ended) {
      return;
    }
    this.ctxIn.drawImage(this.v2, 0, 0, this.width, this.height);
    this.pixelScan();
    var self = this;
    setTimeout(function() {
      self.timerCallback();
    }, 0);
  },

  doLoad: function() {
    this.v2 = document.getElementById("video");
    this.cIn = document.getElementById("cIn");
    this.ctxIn = this.cIn.getContext("2d");
    this.cOut = document.getElementById("cOut");
    this.ctxOut = this.cOut.getContext("2d");
    var self = this;
    this.v2.addEventListener("playing", function() {
      self.width = self.v2.videoWidth;
      self.height = self.v2.videoHeight;
      cIn.width = self.v2.videoWidth;
      cIn.height = self.v2.videoHeight;
      cOut.width = self.v2.videoWidth;
      cOut.height = self.v2.videoHeight;
      self.timerCallback();
    }, false);
  },

  pixelScan: function() {
    var frame = this.ctxIn.getImageData(0, 0, this.width, this.height);
    for (var i = 0; i < frame.data.length; i += 4) {
      var grayscale = frame.data[i] * .3 + frame.data[i + 1] * .59 + frame.data[i + 2] * .11;
      frame.data[i] = grayscale;
      frame.data[i + 1] = grayscale;
      frame.data[i + 2] = grayscale;
    }
    this.ctxOut.putImageData(frame, 0, 0);
    return;
  }
};
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try adjusting your timer to avoid 0 as the timeout value:
setTimeout(function() {
  self.timerCallback();
}, 34);
34 ms is plenty, as video frame rates are typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30 ≈ 33 ms per frame. If you use 0 you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34 ms you end up processing the same frame twice or more, which is of course unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to stick with 34 ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JS to process in real time (on a typical consumer computer). Try to reduce the video size so a normally spec'ed computer will be able to process the data.
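If re-encoding the source video isn't an option, a related way to cut the per-frame cost is to process a downscaled copy of each frame. A sketch, with example sizes and assuming `video` is the <video> element:

// Draw the HD frame into a smaller off-screen canvas so getImageData /
// putImageData only touch a quarter of the pixels.
var procCanvas = document.createElement('canvas');
procCanvas.width = 960;  // half of 1920
procCanvas.height = 540; // half of 1080
var procCtx = procCanvas.getContext('2d');

procCtx.drawImage(video, 0, 0, procCanvas.width, procCanvas.height);
var frame = procCtx.getImageData(0, 0, procCanvas.width, procCanvas.height);
// ...process frame.data as before, then draw procCanvas scaled up onto the
// visible canvas with drawImage().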
Reason 3 (in part)
You don't need two on-screen canvases or even an on-screen video. Try to create these tags dynamically without inserting them into the DOM. Use a single on-screen canvas and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a toggle that halves the effective rate to, for example, 30 FPS, since there is no need to process the same frame of a ~30 FPS video twice.
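As a sketch of that approach, reusing the `processes` object from the question (the function name here is made up):

// Drive processing from requestAnimationFrame and skip every other display
// frame, which roughly matches a ~30 FPS video on a 60 Hz screen.
var processEveryOtherFrame = (function () {
  var skip = false;
  return function loop() {
    if (processes.v2.paused || processes.v2.ended) return;
    skip = !skip;
    if (!skip) {
      processes.ctxIn.drawImage(processes.v2, 0, 0, processes.width, processes.height);
      processes.pixelScan();
    }
    requestAnimationFrame(loop);
  };
})();

// Call processEveryOtherFrame() from the "playing" handler instead of timerCallback().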
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
    video = document.createElement('video'),
    ctx = canvas.getContext('2d');

video.preload = 'auto';
video.addEventListener('canplay', start, false);

if (video.canPlayType('video/mp4')) {
  video.src = 'videoUrl.mp4';
} else if ...etc.
Then when the video has loaded enough data (on metadata or canplay) you set the off-screen (and on-screen) canvas element to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while the video is playing, process its frames and copy the result to the on-screen canvas you defined before.
You don't have to have an off-screen canvas; I merely mention this because your original code used an "in" and an "out" canvas, IIRC. You can simply use a single on-screen canvas together with the off-screen video: draw the video frame to the canvas, process it, and put the processed data back. That should work fine in this case too.
I ran a profile in Chrome and it points to line 46 as taking up the most CPU.
setTimeout(function() {
  self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issue and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and I was using HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So, if you are having this slow render issue, it is probably an issue with the video encoding. The easiest fix is to use a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a huge learning curve and poor public documentation.
