So far I have successfully established an RTC connection with a datachannel between two peers (signaling via a running Node.js server). I can send data back and forth.
I have also successfully streamed the webcam from one peer to another and vice versa.
How exactly am I doing this?
Both peers do this:
function handleRemoteStreamAdded(event) {
    console.log('Remote stream added.');
    remoteStream = event.stream;
    remoteVideo.srcObject = remoteStream;
}

function gotStream(stream) {
    ...
    pc.addStream(stream);
    ...
}
navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
...
pc = new RTCPeerConnection();
...
pc.onaddstream = handleRemoteStreamAdded;
So I basically say that whenever a stream is added (pc.addStream), handleRemoteStreamAdded runs. It all works fine.
But what I really want to do as a next step is to add a button to each client and give each of them the option of whether or not to stream their cam to the other side. If they want to, the stream should start automatically on the other end. Unfortunately, I just can't figure out how to do it.
In theory, my idea was to add an event listener to a button, whose click event then triggers:
navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
By doing this, I effectively also call pc.addStream(stream) via gotStream. Then I send a message to the other end like "display my cam", and on receiving this message, that other peer should somehow trigger handleRemoteStreamAdded. But within that function there is the pre-defined event that I can only "access" locally via pc.onaddstream = handleRemoteStreamAdded;
How can I automatically start streaming the other side's cam as soon as I either get a message like "display my cam" or by some event?
are you creating another offer and doing a new signaling exchange after calling pc.addStream? (which fwiw is deprecated; prefer addTrack and ontrack)
See https://webrtc.github.io/samples/src/content/peerconnection/upgrade/ for a similar thing adding video to an audio-only call.
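To make that concrete: here is a minimal sketch of the button-triggered flow using the non-deprecated addTrack/ontrack API, assuming a sendSignal() helper over your existing datachannel or signaling server (that name is a placeholder, not part of the original code):

// Sending side: clicking the button grabs the cam and adds its tracks,
// which fires negotiationneeded and so triggers a fresh offer.
shareButton.addEventListener('click', async () => {
    const stream = await navigator.mediaDevices.getUserMedia({video: true});
    for (const track of stream.getTracks()) {
        pc.addTrack(track, stream);
    }
});

pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendSignal({sdp: pc.localDescription}); // placeholder signaling helper
};

// Receiving side: no "display my cam" message is needed; after the
// renegotiated answer is exchanged, the track event fires by itself.
pc.ontrack = (event) => {
    remoteVideo.srcObject = event.streams[0];
};

The key point is that adding tracks renegotiates the connection, and the other peer's ontrack handler then fires automatically.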
I've set up a workspace in Flex to handle incoming SMS contacts from customers. What I'm trying to do is enable an audio notification that a new SMS message has come into Flex. I'm working on a Flex Plugin to do this.
What I've done is added a listener for a new reservation being created.
If a new reservation has been created I'm trying to play an audio file as the notification. I've enabled error logging but the code is not triggering any errors.
init(flex, manager) {
    let ringer = new Audio("*.mp3");
    ringer.loop = true;
    const resStatus = ["accepted", "rejected", "rescinded", "timeout"];
    manager.workerClient.on("reservationCreated", function(reservation) {
        if (reservation.task.taskChannelUniqueName === 'sms') {
            ringer.play();
        }
        // stop ringing once the reservation reaches any terminal state
        resStatus.forEach((e) => {
            reservation.on(e, () => {
                ringer.pause();
            });
        });
    });
}
I was expecting the mp3 to play if a new reservation was created with a taskChannelUniqueName of 'sms'. New SMS messages come into the sms channel. When running in Flex, no sound plays and no errors are logged.
Try playing the sound outside this method; does it work?
A few possible problems you may face:
1) new Audio("*.mp3"): do you actually load a sound file here? I think something is wrong with that path.
If not:
2) you likely need a user interaction with flex-ui first, as described here: https://www.twilio.com/docs/flex/audio-player#troubleshooting
Here is an example that I have, and it works:
manager.chatClient.on("messageAdded", () => {
    // some stuff going on here
    new Audio(NEW_MESSAGE_AUDIO).play();
});
where NEW_MESSAGE_AUDIO is a data:audio/mpeg;base64 URL.
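One more thing worth checking, since nothing is logged: play() returns a promise, and autoplay-policy rejections are silent unless you catch them. A small sketch of the same handler with the rejection surfaced:

manager.workerClient.on("reservationCreated", (reservation) => {
    if (reservation.task.taskChannelUniqueName === "sms") {
        ringer.play().catch((err) => {
            // A NotAllowedError here usually means the browser blocked
            // autoplay because there was no user interaction yet.
            console.error("Audio playback failed:", err);
        });
    }
});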
I have a video call application based on WebRTC. It is working as expected. However, while a call is in progress, if I disconnect and reconnect my audio device (mic + speaker), only the speaker part keeps working. The mic seems to stop working: the other side can't hear me anymore.
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Your question appears simple (the symmetry with speakers is alluring), but once we're dealing with users who have multiple cameras and microphones, it's not that simple: if your user disconnects the bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
        pc.addTrack(stream.getVideoTracks()[0], stream);
        sender = pc.addTrack(stream.getAudioTracks()[0], stream);
        // When the mic is unplugged its track ends; start watching for devices.
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        console.log(e);
    }
})();

async function tryAgain() {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({audio: true});
        // Swap the new mic track into the existing sender; no renegotiation needed.
        await sender.replaceTrack(stream.getAudioTracks()[0]);
        navigator.mediaDevices.ondevicechange = null;
        // Re-arm in case the new mic is unplugged again later.
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        if (e.name == "NotFoundError") return; // no mic yet; keep waiting for devicechange
        console.log(e);
    }
}

// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
The local peer may add a camera track and a screen track:
pc.addTrack(cameraTrack, stream);
// I want to do something like:
// screenTrack.setField('isScreen', true);
pc.addTrack(screenTrack, stream);
Remote peer receives track event:
pc.ontrack = function(evt) {
    if (evt.track.getField('isScreen') == true) {
        // do something for the screen track
    } else {
        // do something for the user camera
    }
};
The question is: how can the remote peer tell the tracks apart, since the local peer may send them in a different order?
Is it possible for the local user to do something like cameraTrack.addMetaData('type', 'camera') or screenTrack.setField('isScreen', true)?
Basically, how can extra metadata about a track be communicated to remote peers?
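WebRTC itself doesn't carry arbitrary per-track metadata, so the usual workaround is to send a label mapping over your signaling channel (or a datachannel), keyed by something that is stable on both ends such as the transceiver's mid. A rough sketch, where sendSignal()/onSignal() are hypothetical helpers standing in for whatever signaling you already have:

// Local peer: label each sender's transceiver. Call this after
// setLocalDescription, once mids have been assigned.
const camSender = pc.addTrack(cameraTrack, stream);
const screenSender = pc.addTrack(screenTrack, stream);

function sendTrackLabels() {
    const labels = {};
    for (const t of pc.getTransceivers()) {
        if (t.sender === camSender) labels[t.mid] = 'camera';
        if (t.sender === screenSender) labels[t.mid] = 'screen';
    }
    sendSignal({trackLabels: labels}); // hypothetical signaling helper
}

// Remote peer: look the label up by mid when the track arrives.
let trackLabels = {};
onSignal((msg) => { if (msg.trackLabels) trackLabels = msg.trackLabels; });

pc.ontrack = (evt) => {
    if (trackLabels[evt.transceiver.mid] === 'screen') {
        // do something for the screen track
    } else {
        // do something for the user camera
    }
};

This works regardless of the order the tracks arrive in, because the mid is negotiated in the SDP and is identical on both sides.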
I want to change from an audio/video stream to a "screensharing" stream:
peerConnection.removeStream(streamA) // __o_j_sep... in Screenshots below
peerConnection.addStream(streamB) // SSTREAM in Screenshots below
streamA is a video/audio stream coming from my camera and microphone.
streamB is the screencapture I get from my extension.
They are both MediaStream objects. (Console screenshots omitted; see remark *1 below.)
But if I remove streamA from the peerConnection and addStream(streamB) as above, nothing seems to happen.
The following works as expected (the stream on both ends is removed and re-added)
peerConnection.removeStream(streamA) // __o_j_sep...
peerConnection.addStream(streamA) // __o_j_sep...
More Details
I have found this example which does "the reverse" (switching from screen capture to audio/video with the camera), but I can't spot a significant difference.
The peerConnection RTCPeerConnection object is actually created by the SIPML library (source code available here), and I access it like this:
var peerConnection = stack.o_stack.o_layer_dialog.ao_dialogs[1].o_msession_mgr.ao_sessions[0].o_pc
(Yes, this does not look right, but there is no official way to get access to the peer connection; see the discussions here and here.)
Originally I tried to just (ex)change the videoTracks of streamA with the videoTrack of streamB; see my question here. It was suggested that I should try to renegotiate the peer connection (by removing/adding streams to it), because addTrack does not trigger a re-negotiation.
I've also asked for help here but the maintainer seems very busy and didn't have a chance to respond yet.
* 1 Remark: Why does streamB not have a videoTracks property? The stream plays in an HTML <video> element and seems to "work". Here is how I get it:
navigator.webkitGetUserMedia({
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'desktop',
            chromeMediaSourceId: streamId,
            maxWidth: window.screen.width,
            maxHeight: window.screen.height
            //, maxFrameRate: 3
        }
    }
}, function(localMediaStream) { // success callback
    SSTREAM = localMediaStream; // streamB
}, function(error) { // fail callback
    console.log(error);
});
It also seems to have a video track. (Console screenshot omitted.)
I'm running:
OS X 10.9.3
Chrome Version 35.0.1916.153
To answer your first question: when modifying the MediaStream in an active peer connection, the peerconnection object fires an onnegotiationneeded event. You need to handle that event and re-exchange your SDPs. The main reason for this is so that both parties know what streams are being sent between them. When the SDPs are exchanged, the MediaStream ID is included, and if there is a new stream with a new ID (even with all other things being equal), a re-negotiation must take place.
For your second question (about SSTREAM): it does indeed contain video tracks, but there is no videoTracks attribute on webkitMediaStream. You can grab tracks via their ID, however.
Since there can be numerous tracks of each media type, there is no single attribute for a video track or audio track, but instead an array of them. The .getVideoTracks() call returns an array of the current video tracks, so you COULD grab a particular video track by indicating its index: .getVideoTracks()[0].
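A minimal sketch of such an onnegotiationneeded handler in the callback style of that era, assuming a sendMessage() signaling helper like the one elsewhere in this thread:

peerConnection.onnegotiationneeded = function() {
    peerConnection.createOffer(function(offer) {
        peerConnection.setLocalDescription(offer);
        // re-exchange SDPs so the remote peer learns the new stream ID
        sendMessage(JSON.stringify({sdp: offer}));
    }, function(err) {
        console.log(err);
    });
};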
I do something similar: on clicking a button, I remove the active stream and add the other one.
This is the way I do it, and it works perfectly for me:
// Stop and remove the currently active camera stream first.
_this.rtc.localstream.stop();
_this.rtc.pc.removeStream(_this.rtc.localstream);

// Audio and screen are captured as two separate streams (see note below).
var constraints_audio = {
    audio: true
};
var constraints_screen = {
    audio: false,
    video: {
        mandatory: {
            chromeMediaSource: 'screen'
        }
    }
};

// Capture the audio-only stream, then renegotiate.
getUserMedia(constraints_audio, function(localstream_aud) {
    _this.rtc.localstream_aud = localstream_aud;
    _this.rtc.mediaConstraints = constraints_audio;
    _this.rtc.createOffer();
});

// Capture the screen stream, then renegotiate.
getUserMedia(constraints_screen, function(localstream) {
    _this.rtc.localstream = localstream;
    _this.rtc.mediaConstraints = constraints_screen;
    _this.rtc.createStream();
    _this.rtc.createOffer();
});
Chrome doesn't allow audio along with 'screen' capture, so I create a separate stream for it.
You will need to do the opposite in order to switch back to your older video stream or actually to any other stream you want.
Hope this helps
I'm experimenting with WebRTC between two browsers using RTCPeerConnection and my own long-polling implementation. I've created a demo application which works successfully in Mozilla Nightly (22); however, in Chrome (25) I can't get any remote video, and only an "empty black video" appears. Is there something wrong in my JS code?
The function sendMessage(message) sends a message to the server via long-polling; on the other side it is received by onMessage().
var peerConnection;
var peerConnection_config = {"iceServers": [{"url": "stun:23.21.150.121"}]};

// when message from server is received
function onMessage(evt) {
    if (!peerConnection)
        call(false);
    var signal = JSON.parse(evt);
    if (signal.sdp) {
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp));
    } else {
        peerConnection.addIceCandidate(new RTCIceCandidate(signal.candidate));
    }
}

function call(isCaller) {
    peerConnection = new RTCPeerConnection(peerConnection_config);

    // send any ice candidates to the other peer
    peerConnection.onicecandidate = function(evt) {
        sendMessage(JSON.stringify({"candidate": evt.candidate}));
    };

    // once remote stream arrives, show it in the remote video element
    peerConnection.onaddstream = function(evt) {
        // attach media stream to remote video - WebRTC wrapper
        attachMediaStream($("#remote-video").get(0), evt.stream);
    };

    // get the local stream, show it in the local video element and send it
    getUserMedia({"audio": true, "video": true}, function(stream) {
        // attach media stream to local video - WebRTC wrapper
        attachMediaStream($("#local-video").get(0), stream);
        $("#local-video").get(0).muted = true;
        peerConnection.addStream(stream);
        if (isCaller)
            peerConnection.createOffer(gotDescription);
        else {
            peerConnection.createAnswer(gotDescription);
        }
        function gotDescription(desc) {
            sendMessage(JSON.stringify({"sdp": desc}));
            peerConnection.setLocalDescription(desc);
        }
    }, function() {
    });
}
My best guess is that there is a problem with your STUN server configuration. To determine if this is the issue, try using Google's public STUN server stun:stun.l.google.com:19302 (which won't work in Firefox, but should definitely work in Chrome), or test on a local network with no STUN server configured.
Also, verify that your ice candidates are being delivered properly. Firefox doesn't actually generate 'icecandidate' events (it includes the candidates in the offer/answer), so an issue with delivering candidate messages could also explain the discrepancy.
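One concrete thing to check on the candidate path: Chrome fires a final icecandidate event with evt.candidate === null, and passing that through to new RTCIceCandidate(...) on the other side will throw. A sketch of guarding both ends of the code above:

peerConnection.onicecandidate = function(evt) {
    if (evt.candidate) { // skip the end-of-candidates null
        sendMessage(JSON.stringify({"candidate": evt.candidate}));
    }
};

function onMessage(evt) {
    var signal = JSON.parse(evt);
    if (signal.sdp) {
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp));
    } else if (signal.candidate) { // ignore empty candidate messages
        peerConnection.addIceCandidate(new RTCIceCandidate(signal.candidate));
    }
}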
Make sure your video tag attribute autoplay is set to 'autoplay'.
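For example, in the markup (the remote-video id matches the question's code):

<video id="remote-video" autoplay></video>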