I am trying to add a video track to the user's stream object so that the peers can see it. In componentDidMount() I initially get permission to use the microphone only, but I have a button that I would like to use to add a video track later.
I have a mute/unmute button that toggles the audio, and that works just fine, but when I try to add a video track the same way, it never arrives at the peers.
This is the code I use to get access to the microphone only:
getAudio(callback, err) {
  const options = { video: false, audio: true };
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    return navigator.mediaDevices.getUserMedia(options)
      .then(stream => callback(stream))
      .catch(e => err(e));
  }
  return navigator.getUserMedia(options, callback, err);
}
I call this in componentDidMount() like so:
this.getAudio(this.onAudio, err => {
  this.setState({
    mediaErr: 'Could not access webcam'
  });
  console.log('getMedia error', err);
});
onAudio() creates the peers, since it runs on mount; a sketch of it follows the mute/unmute code below. I have a button I use to mute/unmute the audio like this:
toggleMicrophone() {
  const audioTrack = this.stream.getAudioTracks()[0];
  audioTrack.enabled = !audioTrack.enabled;
  this.setState({
    microphoneEnabled: audioTrack.enabled
  });
}
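For context, a sketch of what onAudio() might look like, reconstructed from the description above (createPeers is a hypothetical helper name; only "creates the peers" and the use of this.stream are stated in the question):

onAudio(stream) {
  // Keep a reference so toggleMicrophone() and onVideo() can reach the tracks
  this.stream = stream;
  // Hypothetical helper standing in for whatever peer setup runs on mount
  this.createPeers(stream);
}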
The mute/unmute toggle works fine, so I tried to add the video track in much the same way. I have a button that calls getVideo(), which is identical to getAudio() except that in the options both audio and video are set to true. getVideo() then calls onVideo(), passing it the stream it gets from getUserMedia().
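For reference, that getVideo() would look something like this (a sketch reconstructed from the description above, not the original code):

getVideo(callback, err) {
  // Identical to getAudio(), except both audio and video are requested
  const options = { video: true, audio: true };
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    return navigator.mediaDevices.getUserMedia(options)
      .then(stream => callback(stream))
      .catch(e => err(e));
  }
  return navigator.getUserMedia(options, callback, err);
}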
The onVideo() function:
onVideo(stream) {
  this.stream.addTrack(stream.getVideoTracks()[0]);
}
Since the mute button worked just by disabling the audio track, I thought I could just add the video track here and the peers would see the video stream, but it doesn't work that way.
The video track appears for the user that pressed the button, but not for the peers.
What am I missing?
I implemented web push notifications. The notification arrives, but instead of the custom notification sound I added, the default system sound plays. I have added the sound in the code below; can anyone tell me why this custom sound is not playing when a push is received?
self.addEventListener('push', async function (event) {
  const data = event.data.json();
  console.log(data);
  const title = 'Sound Notification';
  const options = {
    sound: '../public/messageNotification.mp3',
  };
  try {
    registration.showNotification(title, options);
  } catch (e) {
    registration.showNotification(title, options);
  }
});
I think you can use HTMLAudioElement for this purpose. For example:
let sound: HTMLAudioElement;
sound = new Audio();
sound.src = '../public/messageNotification.mp3';
sound.load();
sound.play();
We would need to take a look at what registration.showNotification does, but if it is working correctly and the sound is still not playing, it might be because modern browsers block autoplay in some situations.
For example, Chrome has this autoplay policy. Firefox and Safari might have slightly different policies.
In these cases, you might need to find workarounds for each browser, or you can instruct users to always enable autoplay for your site. In Chrome 104, you do this by clicking the lock icon next to the URL, selecting Site settings, and then choosing Allow under Sound.
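One way to detect a blocked attempt is to check the promise that play() returns, which rejects when autoplay is denied. A minimal sketch:

const sound = new Audio('../public/messageNotification.mp3');
sound.play().catch((err) => {
  // Autoplay was blocked; defer playback until a user gesture
  console.warn('Playback blocked:', err.name);
});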
const song = new Audio("url");
song.play(); // to play the audio
I am making a web app using WebRTC that allows two users to communicate with each other using both video and audio. The app uses Node.js as the signaling server. The app works fine when communicating between two desktops, but when I try desktop-to-mobile communication, if the user initiating the offer is the one on the desktop, the one on mobile can't hear any sound. If it happens the other way around, both have audio. When I check the devtools, the audio stream is sent from the desktop and is received by the mobile (it is active and not muted), but there is no sound. I use an audio element to play the audio stream and a video element to play the video stream. I have tested this on both Chrome and Firefox and I encounter the same problem.
If anyone can help it would be greatly appreciated.
Below are code samples of the ontrack event:
rtcConnection.ontrack = function(event) {
  console.log('Remote stream received.');
  if (event.streams[0].getAudioTracks().length > 0) {
    event.streams[0].getAudioTracks().forEach((track) => {
      remoteAudioStream.addTrack(track);
    });
    audioPlayer.srcObject = remoteAudioStream;
  }
  if (event.streams[0].getVideoTracks().length > 0) {
    event.streams[0].getVideoTracks().forEach((track) => {
      remoteVideoStream.addTrack(track);
    });
    localVideo.srcObject = remoteVideoStream;
  }
};
and the code that captures the media stream:
function getUserMedia() {
  let getAudio = true;
  let getVideo = true;
  let constraints = { audio: getAudio, video: getVideo };
  // Ask the user to allow access to their media devices
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function(data) { // on success, keep the stream and join the room
      localStream = data;
      console.log('Getting user media succeeded.');
      console.log('RTC Connection created. Getting user media. Adding stream tracks to RTC connection');
      sendMessage({ type: 'peermessage', messagetype: 'info', messagetext: 'Peer started video streaming.' });
      // Stream to be sent to the other user
      localStream.getTracks().forEach(track => rtcConnection.addTrack(track, localStream));
      console.log('Creating offer');
      rtcConnection.createOffer()
        .then(function(offer) { // createOffer success
          console.log('Offer created. Setting it as local description');
          return rtcConnection.setLocalDescription(offer);
        }, logError)
        .then(function() { // setLocalDescription success
          console.log('Offer set as local description. Sending it to agent');
          sendMessage(rtcConnection.localDescription);
        }, logError);
    });
}
I have a video call application based on WebRTC. It works as expected. However, when a call is in progress, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part works. The mic part seems to stop working: the other side can't hear anymore.
Is there any way to inform WebRTC to take audio input again once the audio device is connected back?
Your question appears simple—the symmetry with speakers is alluring—but once we're dealing with users who have multiple cameras and microphones, it's not that simple: If your user disconnects their bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are: primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
    pc.addTrack(stream.getVideoTracks()[0], stream);
    sender = pc.addTrack(stream.getAudioTracks()[0], stream);
    // When the mic's track ends (device unplugged), start watching for re-insertion
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    console.log(e);
  }
})();

async function tryAgain() {
  try {
    const stream = await navigator.mediaDevices.getUserMedia({audio: true});
    // Swap the new microphone track into the existing sender; no renegotiation needed
    await sender.replaceTrack(stream.getAudioTracks()[0]);
    navigator.mediaDevices.ondevicechange = null;
    // Re-arm in case the microphone is unplugged again
    sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
  } catch (e) {
    if (e.name == "NotFoundError") return; // a different device was inserted; keep waiting
    console.log(e);
  }
}
// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
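For the multi-device decisions mentioned earlier, here is a sketch of combining enumerateDevices() with the deviceId constraint; the label-matching preference logic is an assumption for illustration:

async function pickMicrophone(preferredLabel) {
  // Labels are only non-empty once the user has granted media permission
  const devices = await navigator.mediaDevices.enumerateDevices();
  const mics = devices.filter((d) => d.kind === 'audioinput');
  if (mics.length === 0) throw new Error('No microphone found');
  const mic = mics.find((d) => d.label.includes(preferredLabel)) || mics[0];
  return navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: mic.deviceId } }
  });
}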
I've created a web app that allows users to make a voice recording and have noticed that there are problems with picking the correct audio input device. Firefox works great, but Chrome and Safari don't always record if I use the default way of initializing the audio recording: navigator.mediaDevices.getUserMedia({audio: true}). Because of this, I have to specify which microphone to use, like so:
let dD = [];
navigator.mediaDevices.enumerateDevices().then((devices) => {
  dD = devices.filter((d) => d.kind === 'audioinput');
  try {
    // Check if there is a second audio input and select it;
    // it turns out that this works in most cases for Chrome :/
    let audioD = dD[1] === undefined ? dD[0] : dD[1];
    navigator.mediaDevices.getUserMedia({ audio: { deviceId: audioD.deviceId } })
      .then(function(stream) {
        startUserMedia(stream);
      })
      .catch(function(err) {
        console.log(`${err.name}: ${err.message}`);
      });
  } catch (err) {
    console.log(`${err.name}: ${err.message}`);
  }
});
The problem with this code is that it only works sometimes. I still get reports from users complaining that the recording is not working for them or that the recording is empty (which might mean that I'm picking the wrong audio input).
I assume my code is not the correct way to find the active (or, let's say, the working) audio input device. How can I check which audio input is the correct one?
I want to change from an audio/video stream to a "screensharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects (screenshots omitted here; see Remark *1 below).
But if I remove streamA from the peerConnection and add streamB as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added):
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example, which does "the reverse" (switching from screen capture to audio/video with the camera), but I can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by the SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the Peer Connection; see the discussions here and here.)
Originally I tried to just exchange the videoTracks of streamA with the videoTrack of streamB (see my question here). It was suggested that I should try to renegotiate the Peer Connection (by removing/adding streams to it), because addTrack does not trigger a renegotiation.
I've also asked for help here, but the maintainer seems very busy and hasn't had a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'desktop',
      chromeMediaSourceId: streamId,
      maxWidth: window.screen.width,
      maxHeight: window.screen.height
      //, maxFrameRate: 3
    }
  }
},
// success callback
function(localMediaStream) {
  SSTREAM = localMediaStream; // streamB
},
// fail callback
function(error) {
  console.log(error);
});
It also seems to have a video track (screenshot omitted here).
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peerconnection object will fire an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason behind this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a renegotiation must take place.
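A minimal sketch of such a handler (pc and the sendMessage signaling call are assumptions standing in for your own setup):

pc.onnegotiationneeded = function() {
  // Re-exchange SDPs so the remote side learns about the new stream
  pc.createOffer()
    .then(function(offer) { return pc.setLocalDescription(offer); })
    .then(function() { sendMessage(pc.localDescription); }) // hypothetical signaling send
    .catch(function(err) { console.log(err); });
};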
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since each media type can have numerous tracks, there is no single videoTrack or audioTrack attribute, but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you could grab a particular video track by indicating its index, e.g. .getVideoTracks()[0].
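For instance, checking the stream from the remark above:

// SSTREAM is the screen-capture stream from the question
var videoTracks = SSTREAM.getVideoTracks(); // an array of MediaStreamTrack objects
console.log(videoTracks.length, videoTracks[0].id);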
I do something similar: on clicking a button I remove the active stream and add the other.
This is the way I do it, and it works perfectly for me:
// Stop the currently active stream and remove it from the peer connection
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Switching to an audio-only stream:
var constraints_audio = {
  audio: true
};

gotStream = function (localstream_aud) {
  _this.rtc.localstream_aud = localstream_aud;
  _this.rtc.mediaConstraints = constraints_audio;
  _this.rtc.createOffer();
};

getUserMedia(constraints_audio, gotStream);

// Switching to a screen-capture stream:
var constraints_screen = {
  audio: false,
  video: {
    mandatory: {
      chromeMediaSource: 'screen'
    }
  }
};

gotStream = function (localstream) {
  _this.rtc.localstream = localstream;
  _this.rtc.mediaConstraints = constraints_screen;
  _this.rtc.createStream();
  _this.rtc.createOffer();
};

getUserMedia(constraints_screen, gotStream);
Chrome doesn't allow audio along with the 'screen' source, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream, or indeed to any other stream you want.
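A sketch of that switch back, following the same pattern (constraints_camera is an assumed name; the _this.rtc calls mirror the snippet above):

// Switch back from screen share to camera + microphone
var constraints_camera = { audio: true, video: true };

_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

getUserMedia(constraints_camera, function (localstream) {
  _this.rtc.localstream = localstream;
  _this.rtc.mediaConstraints = constraints_camera;
  _this.rtc.createStream();
  _this.rtc.createOffer();
});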
Hope this helps