I want to change from an audio/video stream to a "screensharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects that look like this:
* 1 Remark
But if I remove streamA from the peerConnection and addStream(streamB) like above nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added)
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example, which does "the reverse" (switching from screen capture to audio/video with the camera), but I can't spot a significant difference.
The peerConnection (an RTCPeerConnection object) is actually created by the SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the Peer Connection; see the discussions here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB. See the question here. It was suggested that I should try to renegotiate the Peer Connection (by removing/adding streams to it), because addTrack does not trigger a re-negotiation.
I've also asked for help here, but the maintainer seems very busy and hasn't had a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
},
// success callback
function(localMediaStream) {
    SSTREAM = localMediaStream; // streamB
},
// fail callback
function(error) {
    console.log(error);
});
It also seems to have a video track.
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peerConnection object will fire an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason behind this is so that both parties know which streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a re-negotiation must take place.
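For illustration, here is a minimal sketch of what handling that event and re-negotiating could look like with the callback-style API of that era; sendOfferToRemotePeer and logError are placeholders for your own signaling and error handling, not part of SIPML:
peerConnection.onnegotiationneeded = function() {
    peerConnection.createOffer(function(offer) {
        peerConnection.setLocalDescription(offer, function() {
            sendOfferToRemotePeer(offer); // the remote peer answers; apply it with setRemoteDescription
        }, logError);
    }, logError);
};

peerConnection.removeStream(streamA);
peerConnection.addStream(streamB); // changing the streams fires onnegotiationneeded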
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStreams. You can grab tracks via their ID, however.
Since there is the possibility of having numerous tracks for each media type, there is no single attribute for a video track or audio track, but instead an array of each. The .getVideoTracks() call returns an array of the current video tracks. So you could grab a particular video track by its index, e.g. .getVideoTracks()[0].
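For example (using SSTREAM, your screen-capture stream):
var videoTracks = SSTREAM.getVideoTracks(); // array of MediaStreamTrack objects
var firstVideoTrack = videoTracks[0];       // grab a particular track by index
console.log(videoTracks.length, firstVideoTrack.id, firstVideoTrack.enabled);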
I do something similar: on clicking a button I remove the active stream and add the other.
This is the way I do it, and it works perfectly for me.
// Stop and remove the currently active stream
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Switch to an audio-only stream
var constraints_audio = {
    audio: true
};

gotStream = function (localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
};

getUserMedia(constraints_audio, gotStream);
// Switch to a screen-capture stream
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};

gotStream = function (localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
};

getUserMedia(constraints_screen, gotStream);
Chrome doesn't allow audio along with the 'screen' source, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream, or indeed to any other stream you want.
Hope this helps
Related
My team is adapting the sipml5 library to create an HTML5 softphone for use in our organization. The full repository is here: https://github.com/L1kMakes/sipml5-ng . We have the code working well; audio and video calls work flawlessly. In the original code we forked from (which dates from around 2012), screen sharing was accomplished with a browser plugin, but HTML5 and WebRTC have since changed to allow this to be done with just JavaScript.
I am having difficulty adapting the code to accommodate this. I am starting with the code here on line 828: https://github.com/L1kMakes/sipml5-ng/blob/master/src/tinyMEDIA/src/tmedia_session_jsep.js This works, though without audio. That makes sense as the only possible audio stream from a screen share is the screen audio, not the mic audio. I am attempting to initialize an audio stream from getUserMedia, grab a video stream from getDisplayMedia, and present that to the client as a single mediaStream. Here's my adapted code:
if ( this.e_type == tmedia_type_e.SCREEN_SHARE ) {
// Plugin-less screen share using WebRTC requires "getDisplayMedia" instead of "getUserMedia"
// Because of this, audio constraints become limited, and we have to use async to deal with
// the promise variable for the mediastream. This is a change since Chrome 71. We are able
// to use the .then aspect of the promise to call a second mediaStream, then attach the audio
// from that to the video of our second screenshare mediaStream, enabling plugin-less screen
// sharing with audio.
let o_stream = null;
let o_streamAudio = null;
let o_streamVideo = null;
let o_streamAudioTrack = null;
let o_streamVideoTrack = null;
try {
navigator.mediaDevices.getDisplayMedia(
{
audio: false,
video: !!( this.e_type.i_id & tmedia_type_e.VIDEO.i_id ) ? o_video_constraints : false
}
).then(o_streamVideo => {
o_streamVideoTrack = o_streamVideo.getVideoTracks()[0];
navigator.mediaDevices.getUserMedia(
{
audio: o_audio_constraints,
video: false
}
).then(o_streamAudio => {
o_streamAudioTrack = o_streamAudio.getAudioTracks()[0];
o_stream = new MediaStream( [ o_streamVideoTrack , o_streamAudioTrack ] );
tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
});
});
} catch ( s_error ) {
tmedia_session_jsep01.onGetUserMediaError(s_error, This);
}
} else {
try {
navigator.mediaDevices.getUserMedia(
{
audio: (this.e_type == tmedia_type_e.SCREEN_SHARE) ? false : !!(this.e_type.i_id & tmedia_type_e.AUDIO.i_id) ? o_audio_constraints : false,
video: !!(this.e_type.i_id & tmedia_type_e.VIDEO.i_id) ? o_video_constraints : false // "SCREEN_SHARE" contains "VIDEO" flag -> (VIDEO & SCREEN_SHARE) = VIDEO
}
).then(o_stream => {
tmedia_session_jsep01.onGetUserMediaSuccess(o_stream, This);
});
} catch (s_error ) {
tmedia_session_jsep01.onGetUserMediaError(s_error, This);
}
}
My understanding is that o_stream should represent the resolved MediaStream (its tracks, not a promise) when doing a screen share. On the other end, we are using the client "MicroSIP." When I make a video call, I get my video preview locally in our web app when the call is placed; then, when the call is answered, the MicroSIP client shows a green square for a second and then resolves to my video. When I make a screen-share call, my local web app shows the local preview of the screen share, but upon answering the call, my MicroSIP client just gets a green square and never gets the actual screen share.
The video constraints for both are the same. If I add debugging output describing what is actually in the media streams, they appear identical as far as I can tell. I made a test video call and a test screen-share call, captured debug logs from each, and held them side by side in Notepad++; everything appears identical save for the explicit debug describing the traversal down the permission-request tree with getUserMedia versus getDisplayMedia. I can't really post the debug logs here, as cleaning them of information from my organization would leave them pretty barren. Save for the extra debug output on the getDisplayMedia call before getUserMedia, the timestamps, and the unique IDs related to individual calls, the log files are identical.
I am wondering if the media streams are not resolving from their promises before the "then" completes, but asynchronous JavaScript and promises are still a bit over my head. I do not believe I should convert this function to async, but I have nothing else to debug here; the MediaStream is working, as I can see it locally, but I'm stumped figuring out what is going on with the remote send.
The solution was... nothing; the code was fine. It turns out the recipient SIP client we were using had an issue where it simply aborts if it receives video larger than 640x480.
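For anyone who hits a similar recipient-side limit, one possible workaround (a sketch, not part of the original fix; it relies on the standard width/height constraints of getDisplayMedia) is to cap the capture resolution:
// Sketch: cap the screen capture at 640x480 for recipients that reject larger video.
async function getCappedScreenShare() {
    return navigator.mediaDevices.getDisplayMedia({
        video: { width: { max: 640 }, height: { max: 480 } },
        audio: false
    });
}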
I have a problem: after subscribing to a remote stream I call a method, setVideoProfile(), and then there is no sound.
this.client.on('stream-subscribed', (event) => {
let remoteStream = event.stream;
remoteStream.play(bindTag, {fit: 'contain'});
remoteStream.setAudioVolume(100);
remoteStream.setVideoProfile('120p_1');
});
When I comment out remoteStream.setVideoProfile('120p_1'), the sound works.
I am using AgoraRTC v2.8.0.
Has anyone encountered this?
setAudioVolume accepts only a number in the range [0, 100], both inclusive.
Setting it to 0 mutes the audio and setting it to 100 is maximum volume.
You can't pass in an arbitrary string like '120p_1'. That's why you do not hear any sound.
Update (from comments below)
You can't set video profiles on remote streams; you can set them only on local streams. You should instead use dual streams and set a fallback mode if you want the user to receive a low-quality version of the remote video:
client.enableDualStream(function() {
console.log("Enable dual stream success!")
}, function(err) {
console.log(err)
});
// Configuration for the receiver. When the network condition is poor, receive audio only.
client.setStreamFallbackOption(remoteStream, 2);
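If you want the receiver to always get the low-quality version (not only as a network fallback), the Web SDK also exposes setRemoteVideoStreamType; a minimal sketch, assuming dual streams are already enabled on the sending side:
// Sketch: explicitly request the low-quality remote stream (0 = high, 1 = low).
client.setRemoteVideoStreamType(remoteStream, 1);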
See the full documentation:
video fallback https://docs.agora.io/en/Interactive%20Broadcast/fallback_web?platform=Web
dual streams https://docs.agora.io/en/Interactive%20Broadcast/API%20Reference/web/interfaces/agorartc.client.html#enabledualstream
I am able to make a video call between two individuals, but now I want to add a button to mute and unmute the audio while streaming video. I searched a lot on the internet, but nothing worked out for me.
Then I found the configuration in the options property of WebRtcPeerSendrecv which enables audio on connection, but the problem is how to update or toggle it during the stream.
Here is my code:
var videoInput = document.getElementById('videoInput');
var videoOutput = document.getElementById('videoOutput');
var constraints = {
audio: true, //how do I toggle this during the stream.
video: {
width: 640,
framerate: 15
}
};
var options = {
localVideo: videoInput,
remoteVideo: videoOutput,
onicecandidate : onIceCandidate,
mediaConstraints: constraints
};
var webRtcPeer = kurentoUtils.WebRtcPeer.WebRtcPeerSendrecv(options, function(error) {
if(error) return onError(error)
this.generateOffer(onOffer)
});
I am also open to any alternative that helps me integrate mute/unmute functionality into my stream.
I have been stuck on this for so long; any kind of help is appreciated. Thanks in advance.
I found the solution. There is a property on webRtcPeer, available after making the peer connection, which allows us to manipulate the audio of the stream: audioEnabled, which is a boolean. What I did is simply change its value to true/false as per my requirement,
like this:
webRtcPeer.audioEnabled = false // set to false to mute the outgoing audio, true to unmute
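For completeness, a minimal sketch of wiring that to a mute/unmute button (the button id and label handling are assumptions, not part of the original code):
// Sketch: toggle audio with a button; assumes a <button id="muteBtn">Mute</button> in the page.
var muteBtn = document.getElementById('muteBtn');
muteBtn.addEventListener('click', function() {
    webRtcPeer.audioEnabled = !webRtcPeer.audioEnabled; // flips the outgoing audio on/off
    muteBtn.textContent = webRtcPeer.audioEnabled ? 'Mute' : 'Unmute';
});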
I have a video call application based on WebRTC. It is working as expected. However, when a call is in progress, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part works. The mic part seems not to work; the other side can't hear anymore.
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Your question appears simple—the symmetry with speakers is alluring—but once we're dealing with users who have multiple cameras and microphones, it's not that simple: If your user disconnects their bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are primarily the ended and devicechange events and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;
(async () => {
try {
const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
pc.addTrack(stream.getVideoTracks()[0], stream);
sender = pc.addTrack(stream.getAudioTracks()[0], stream);
sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
} catch (e) {
console.log(e);
}
})();
async function tryAgain() {
try {
const stream = await navigator.mediaDevices.getUserMedia({audio: true});
await sender.replaceTrack(stream.getAudioTracks()[0]);
navigator.mediaDevices.ondevicechange = null;
sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
} catch (e) {
if (e.name == "NotFoundError") return;
console.log(e);
}
}
// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
Everything works fine (createOffer, createAnswer, ICE candidates, ...), but the incoming remoteStream has 2 tracks: the audio track, which is working, and the video track, which is not working and shows readyState: "muted".
If I do createOffer on page load and then, when starting the call, do createOffer again with the same peerConnection, the video also displays correctly (but then in Firefox I get "Cannot create offer in state have-local-offer").
Any ideas what the problem could be? (The code is too complex to show here.)
Can you see the local video on both sides?
-> On a single PC, only one browser will get access to the camera at any time (either Chrome or Firefox).
-> Try calling between two different machines, or Chrome-to-Chrome, or Firefox-to-Firefox.
"Cannot create offer in state have-local-offer"
It means you already created an offer and are trying to create another without setting the remote answer.
Calling createOffer again is not a good idea. Make sure you create the offer in the following order (synchronously); see the sketch after these steps.
After receiving the stream in the getUserMedia callback, add it to the peerConnection.
After adding the stream, create the offer; in the case of an answer, set the remote offer as well before creating the answer.
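A rough sketch of that order using the callback-style API of that era; pc, sendOfferToRemotePeer, and logError are placeholders for your own peer connection, signaling, and error handling:
navigator.getUserMedia({ audio: true, video: true }, function (stream) {
    pc.addStream(stream);                 // 1. add the stream first
    pc.createOffer(function (offer) {     // 2. then create the offer
        pc.setLocalDescription(offer, function () {
            sendOfferToRemotePeer(offer); // 3. send it; when the answer arrives,
        }, logError);                     //    apply it with setRemoteDescription
    }, logError);
}, logError);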
I was having this issue while preparing the MediaStream in the iOS app. It turns out that I wasn't passing the correct RTCMediaConstraints.
The problem was resolved after I switched to using [RTCMediaConstraints defaultConstraints].
For example:
- (RTCVideoTrack *)createLocalVideoTrack {
RTCVideoTrack* localVideoTrack = nil;
RTCMediaConstraints *mediaConstraints = [RTCMediaConstraints defaultConstraints];
RTCAVFoundationVideoSource *source =
[[self peerConnectionFactory] avFoundationVideoSourceWithConstraints:mediaConstraints];
localVideoTrack =
[[self peerConnectionFactory] videoTrackWithSource:source
trackId:kARDVideoTrackId];
return localVideoTrack;
}