Applying constraints to an audio track from getUserMedia - javascript

Is it possible to apply constraints to a running live audio track? It doesn't seem to work for me, at least on Chrome v80.
Suppose I have a stream:
const stream = await navigator.mediaDevices.getUserMedia({
  audio: {
    autoGainControl: true,
    channelCount: 2,
    echoCancellation: true,
    noiseSuppression: true
  },
  video: false
});
Now, later on I want to change some of those parameters:
for (const track of stream.getAudioTracks()) {
  track.applyConstraints({
    autoGainControl: false,
    echoCancellation: false,
    noiseSuppression: false
  });
}
This has no effect. If I call track.getConstraints(), I see my new constraints but audibly they have no effect until I reload the page and apply them from the beginning. Additionally, when I call track.getSettings(), I see that my new constraints haven't been applied.
I have also tried calling track.enabled = false before applying the constraints, with track.enabled = true afterwards, with no luck.
Any advice on how to get this to work without making a fresh call to getUserMedia()?
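For reference, a minimal way to confirm that nothing actually changes is to diff track.getSettings() around the call (a diagnostic sketch, run inside an async function; the constraint values are just the ones from above):

const [track] = stream.getAudioTracks();
const before = track.getSettings();
await track.applyConstraints({
  autoGainControl: false,
  echoCancellation: false,
  noiseSuppression: false
});
const after = track.getSettings();
// On Chrome v80 these log the same values before and after
console.log('echoCancellation:', before.echoCancellation, '->', after.echoCancellation);
console.log('noiseSuppression:', before.noiseSuppression, '->', after.noiseSuppression);
console.log('autoGainControl:', before.autoGainControl, '->', after.autoGainControl);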

SO user jib, who works on the Firefox and adapter.js projects, wrote a blog post in 2017 about this exact feature.
Here is how they apply the constraints to the track:
async function apply(c) {
  await track.applyConstraints(Object.assign(track.getSettings(), c));
  update();
}
c is an object with the particular constraints to add.
They do it this way because any properties omitted from the MediaTrackConstraints dictionary get reset to their defaults when the constraints are applied.
That said, your solution should have worked too for the properties you did set.
So, using this fiddle, I tried the few UAs I have on my macOS machine:
Chrome
As you reported, settings are not applied.
Here is the issue tracking the upcoming implementation.
From this issue's comments you can also find a workaround, which involves requesting a new MediaStream from the same deviceId as the one you already have and applying the desired constraints to that request.
Here is a fork of jib's fiddle with such a workaround. Note that the deviceId is obtained from track.getSettings().
async function apply(c) {
  track.stop(); // required
  const new_constraints = Object.assign(track.getSettings(), c);
  const new_stream = await gUM({ audio: new_constraints });
  updateSpectrum(audio.srcObject = new_stream);
  track = new_stream.getAudioTracks()[0];
  update();
}
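For completeness, the same workaround can be sketched without the fiddle helpers; this is only a sketch using plain navigator.mediaDevices.getUserMedia, and the audioElement parameter is an assumption:

async function reacquireWithConstraints(track, changes, audioElement) {
  // getSettings() includes deviceId, so merging it keeps us on the same device
  const constraints = Object.assign(track.getSettings(), changes);
  track.stop(); // Chrome needs the old track released before reopening the device
  const newStream = await navigator.mediaDevices.getUserMedia({ audio: constraints });
  audioElement.srcObject = newStream; // re-attach the new stream wherever the old one was used
  return newStream.getAudioTracks()[0];
}

You would then call it as track = await reacquireWithConstraints(track, { echoCancellation: false }, audio).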
Firefox
Works seamlessly.
Safari
Crashes badly. On my machine, running the original fiddle (with only the spectrum tweak) completely breaks gUM for the whole browser:
- The current stream is stopped.
- Any attempt to get a new stream fails, until the whole app is restarted.
The forked fiddle we made for Chrome at least doesn't crash, but it doesn't seem to produce any audible change either...

Related

How to do screen sharing with simple-peer webRTC SDK

I'm trying to implement WebRTC & simple-peer in my chat. Everything works, but I would like to add a screen sharing option. For that I tried this:
$("#callScreenShare").click(async function(){
if(captureStream != null){
p.removeStream(captureStream)
p.addStream(videoStream)
captureStreamTrack.stop()
captureStreamTrack =captureStream= null
$("#callVideo")[0].srcObject = videoStream
$(this).text("screen_share")
}else{
captureStream = await navigator.mediaDevices.getDisplayMedia({video:true, audio:true})
captureStreamTrack = captureStream.getTracks()[0]
$("#callVideo")[0].srcObject = captureStream
p.removeStream(videoStream)
console.log(p)
p.addStream(captureStream)
$(this).text("stop_screen_share")
}
})
But it stops the camera, and after that nothing happens; the video stream on my peer's computer is stuck. No errors, nothing, only that.
I've put a console.log in the handler for the stream event. It fires the first time, but after I call the addStream method it doesn't fire again.
If someone could help me it would be really helpful.
What I do is replace the track. So instead of removing and adding the stream:
p.streams[0].getVideoTracks()[0].stop()
p.replaceTrack(p.streams[0].getVideoTracks()[0], captureStreamTrack, p.streams[0])
This will replace the video track of the stream with the one from the display.
simple-peer docs
The below function will do the trick. Simply call the replaceTrack function, passing it the new track and the remote peer instance.
/* stream: the new track to send (despite the name, this is a MediaStreamTrack) */
function replaceTrack(stream, recipientPeer) {
  recipientPeer.replaceTrack(
    recipientPeer.streams[0].getVideoTracks()[0],
    stream,
    recipientPeer.streams[0]
  )
}
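As a rough usage example (captureStream and p are the variables from the question's click handler):

const screenTrack = captureStream.getVideoTracks()[0];
replaceTrack(screenTrack, p); // swap the outgoing camera track for the screen track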

How to use RTCPeerConnection.removeTrack() to remove video or audio or both?

I'm studying WebRTC and trying to figure out how it works.
I modified this sample on WebRTC.github.io to make getUserMedia the source of leftVideo and stream it to rightVideo. It works.
And I want to add a feature: when I press pause on leftVideo, the sending tracks should be removed (my browser is Chrome 69).
So I changed a part of call():
...
stream.getTracks().forEach(track => {
  pc1Senders.push(pc1.addTrack(track, stream));
});
...
And added a handler on leftVideo:
leftVideo.onpause = () => {
  pc1Senders.map(sender => pc1.removeTrack(sender));
}
I don't want to close the connection, I just want to turn off only video or audio.
But after I pause leftVideo, the rightVideo still gets track.
Am I doing wrong here, or maybe other place?
Thanks for your help.
First, you need to get the stream of the peer. You can mute/hide the stream using the enabled attribute of the MediaStreamTrack. Use the below code snippet to toggle media.
/* stream: MediaStream, type: track kind ('audio' / 'video') */
function toggleTrack(stream, type) {
  stream.getTracks().forEach((track) => {
    if (track.kind === type) {
      track.enabled = !track.enabled;
    }
  });
}
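For example, wired to mute buttons (the button elements and localStream are assumptions here):

muteAudioButton.addEventListener('click', () => toggleTrack(localStream, 'audio'));
muteVideoButton.addEventListener('click', () => toggleTrack(localStream, 'video'));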
const senders = pc.getSenders();
senders.forEach((sender) => pc.removeTrack(sender));
newTracks.forEach((tr) => pc.addTrack(tr));
Get all the senders;
Loop through and remove each sending track;
Add new tracks (if so desired).
Removing and adding tracks triggers renegotiation; a sketch of handling that follows below.
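A minimal sketch of handling that renegotiation, assuming you already have some signaling channel in place:

pc.onnegotiationneeded = async () => {
  // removeTrack()/addTrack() fire this event; create and send a fresh offer
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send({ description: pc.localDescription }); // 'signaling' stands in for your own channel
};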
Edit: or, if you won't need renegotiation (conditions listed below), use replaceTrack (https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/replaceTrack).
Not all track replacements require renegotiation. In fact, even changes that seem huge can be done without requiring negotiation. Here are the changes that can trigger negotiation:
- The new track has a resolution which is outside the bounds of the current track; that is, the new track is either wider or taller than the current one.
- The new track's frame rate is high enough to cause the codec's block rate to be exceeded.
- The new track is a video track and its raw or pre-encoded state differs from that of the original track.
- The new track is an audio track with a different number of channels from the original.
- Media sources that have built-in encoders, such as hardware encoders, may not be able to provide the negotiated codec. Software sources may not implement the negotiated codec.
async switchMicrophone(on) {
  if (on) {
    console.log("Turning on microphone");
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    this.localAudioTrack = stream.getAudioTracks()[0];
    const audioSender = this.peerConnection.getSenders().find(e => e.track?.kind === 'audio');
    if (audioSender == null) {
      console.log("Initiating audio sender");
      this.peerConnection.addTrack(this.localAudioTrack); // will create a sender; the streamless track must be handled on the other side
    } else {
      console.log("Updating audio sender");
      await audioSender.replaceTrack(this.localAudioTrack); // replaceTrack will do it gently, no new negotiation will be triggered
    }
  } else {
    console.log("Turning off microphone");
    this.localAudioTrack.stop(); // this will turn off the mic and make sure you don't have an active on-air indicator
  }
}
This is simplified code; it solves most of the issues described in this topic.
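Regarding the "streamless track" comment above: when addTrack() is called without a stream, the remote ontrack event fires with an empty streams array, so the receiving side has to wrap the track itself. A sketch of that, where remoteAudio is an assumed audio element:

peerConnection.ontrack = (event) => {
  if (event.streams.length > 0) {
    remoteAudio.srcObject = event.streams[0];
  } else {
    // The track arrived without an associated stream; wrap it so it can be attached to an element
    remoteAudio.srcObject = new MediaStream([event.track]);
  }
};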

WebRTC continue video stream when webcam is reconnected

I've got a simple video stream working via getUserMedia, but I would like to handle the case when the webcam I'm streaming from becomes disconnected or unavailable. I've found the oninactive event on the stream object passed to the successCallback function. I would also like to restart the video stream when exactly the same webcam/mediaDevice is plugged back in.
Code example:
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
navigator.getUserMedia(constrains, function successCallback(stream) {
  this.video.src = URL.createObjectURL(stream);
  stream.oninactive = function (error) {
    // this handler runs when the device becomes unavailable.
    this.onStreamInactive(error, stream);
  }.bind(this);
}.bind(this), function errorCallback() {});
Based on the example above, how can I:
- Detect a recently connected media device?
- Check whether it is the same device I was streaming from?
A better way would be to use MediaDevices.ondevicechange() as mentioned in the other answer in this thread, but it is still behind a flag on Chrome. Instead of using ondevicechange() to enumerate devices, poll MediaDevices.enumerateDevices() at a regular interval once you start the call, and at the end of every poll interval compare the list of devices with the list from the previous poll. This way you can detect devices added or removed during the call.
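A rough sketch of that polling approach (the two-second interval and the logging are just placeholders):

let knownDeviceIds = new Set();

async function pollDevices() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const currentIds = new Set(devices.map(d => d.deviceId));
  const added = devices.filter(d => !knownDeviceIds.has(d.deviceId));
  const removed = [...knownDeviceIds].filter(id => !currentIds.has(id));
  if (added.length) console.log('devices added:', added);
  if (removed.length) console.log('devices removed:', removed);
  knownDeviceIds = currentIds;
}

const devicePoller = setInterval(pollDevices, 2000); // clearInterval(devicePoller) when the call ends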
A little late to answer, but it looks like you can use MediaDevices.ondevicechange to attach an event handler, and then in the event handler query MediaDevices.enumerateDevices() to get the full list. Then you inspect the list of devices, identify the one that was recently added by comparing it against a cached list you keep, and compare its properties against a record of the current device's properties. The links have more thorough examples.
Adapted from the ondevicechange reference page
navigator.mediaDevices.ondevicechange = function (event) {
  navigator.mediaDevices.enumerateDevices()
    .then(function (devices) {
      devices.forEach(function (device) {
        console.log(device);
        // check if this is the device that was disconnected
      });
    });
}
Note that the type of the device objects returned by enumerateDevices is described here
Browser Support
It looks like it's pretty patchy as of writing this. See this related question: Audio devices plugin and plugout event on chrome browser for further discussion, but the short story is for Chrome you'll need to enable the "Experimental Web Platform features" flag.
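A simple feature check before picking one of the two approaches could look like this (a sketch; handleDeviceChange and pollDevices are placeholders for your own handlers):

if ('ondevicechange' in navigator.mediaDevices) {
  navigator.mediaDevices.addEventListener('devicechange', handleDeviceChange);
} else {
  // Fall back to polling enumerateDevices() as described above
  setInterval(pollDevices, 2000);
}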

oscilloscope of speaker input stops rendering after a few seconds

The following script reads the audio from the user's microphone and renders an oscilloscope on an HTML canvas.
The source is taken from an example of the Mozilla Developer Network: Visualizations with Web Audio API
And here is the fiddle: http://jsfiddle.net/b7j8pktp/
It uses mozGetUserMedia (note: the code has no fork mechanism for different browsers; it works only with Firefox).
It works fine for a few seconds and then immediately stops rendering.
Whereas this works totally stable: http://mdn.github.io/voice-change-o-matic/
The problem can be reduced to the following code. The microphone activation icon (next to the address bar in Firefox) disappears after about 5 seconds:
navigator.mozGetUserMedia({audio: true},
function() {}, function() {} );
(http://jsfiddle.net/b7j8pktp/2/)
This is a known bug in Firefox. Just take the stream from the getUserMedia call and hook it up to the window like so:
navigator.mozGetUserMedia({ audio: true }, function (stream) {
  window.stream = stream;
  // rest of the code
}, function err() {
  // handle error
});
Hopefully we can get it fixed soon. The problem is that we're failing to add a reference to the stream when we do the AudioContext.createMediaStreamSource call, so that the stream is not referenced by anything anymore when the getUserMedia callback returns, and it is collected by the cycle collector when it runs, that is, a couple seconds later.
You can follow along in https://bugzilla.mozilla.org/show_bug.cgi?id=934512.
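Put together, the workaround amounts to keeping a live reference to the stream for as long as the audio graph needs it; a minimal sketch of the affected part (not the exact fiddle code):

navigator.mozGetUserMedia({ audio: true }, function (stream) {
  window.stream = stream; // keep a reference so the cycle collector cannot reclaim the stream

  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  var analyser = audioCtx.createAnalyser();
  source.connect(analyser);
  // ...draw the oscilloscope from analyser.getByteTimeDomainData() as in the MDN example
}, function (err) {
  console.error(err);
});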

Get audio duration on Chrome for Android

I'm getting the audio/video duration of a file without appending it to the page. Using the same code, getting the video duration works as expected on both desktop and Android. But when using audio files, it says the duration is 0 on Android, while it works on a desktop computer.
// Only working on Desktop
var audio = new Audio(url);
// Hide audio player
// player.appendChild(audio);
audio.addEventListener('loadedmetadata', function () {
  alert(audio.duration);
});
And the below code is working:
// Working on Desktop and Android
var video = document.createElement('video');
video.src = url;
// Hide video
// player.appendChild(video);
video.addEventListener('loadedmetadata', function () {
  alert(video.duration);
});
There is a different approach you can try, but if duration doesn't work with your device (which IMO is a bug) then it's likely this doesn't either; worth a shot though:
audio.seekable.end(audio.seekable.length-1);
or even
audio.buffered.end(audio.buffered.length-1);
though the latter depends on content having been loaded, which in this case probably won't help.
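A guarded version of that idea (sketch only; both ranges can be empty, so check the length first):

audio.addEventListener('progress', function () {
  if (audio.seekable.length > 0) {
    console.log('seekable end:', audio.seekable.end(audio.seekable.length - 1));
  }
  if (audio.buffered.length > 0) {
    console.log('buffered end:', audio.buffered.end(audio.buffered.length - 1));
  }
});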
EDIT: Using the durationchange event is much easier. At first 0 is output, but as soon as the file is loaded (that's where loadedmetadata fails, I guess) the updated, real duration is output.
audio.addEventListener('durationchange', function (e) {
  console.log(e.target.duration); // FIRST 0, THEN REAL DURATION
});
OLD WAY (ABOVE IS MUCH FASTER)
Looks like this "bug" (if this is actually a real bug) is still around. Chrome (40) for Android still outputs 0 as the audio files duration. Researching the web didn't get me a solution but I found out the bug also occurs on iOS. I figured I should post my fix here for you guys.
While audio.duration outputs 0, logging audio outputs the object and you can see that the duration is displayed just right there. All this is happening in the loadedmetadata event.
audio.addEventListener('loadedmetadata', function (e) {
  console.log(e.target.duration); // 0
});
If you log audio.duration in the timeupdate event though, the real duration is output. To output it only once, you could do something like:
var fix = true;
audio.addEventListener('timeupdate', function (e) {
  if (fix === true) {
    console.log(e.target.duration); // REAL DURATION
    fix = false;
  }
  console.log(e.target.currentTime); // UPDATED TIME POSITION
});
I'm not sure why all this is happening. But let's be happy it's nothing serious.
