I am developing a 3-part system in which a remote vehicle connects to a base station via WebRTC, which then pipes video from the vehicle on to a remote station (also over an RTCPeerConnection). So the architecture looks like this:
Vehicle -- PC --> Base Station -- PC --> Remote Station
Where PC is a peer connection. Video is sent from the Vehicle to the Base Station using RTCPeerConnection's addTrack method. The video track must then be forwarded from the Base Station (again via addTrack) to the Remote Station, where it should fire the Remote Station's ontrack handler. The problem is that Node.js (which the Base Station is implemented in) does not natively support MediaStreams (they are only available via a third-party library), and I cannot find a way to trigger the ontrack handler in the Remote Station. If the Vehicle and Remote Station connect directly, the video comes through perfectly, but the Base Station does not trigger the corresponding event in the Remote Station when piping. I added the package that provides MediaStream support, but this produces an error when trying to locally store the stream and/or track (see the tempStream/tempTrack variables), and it does not resolve the issue of the ontrack handler never firing in the Remote Station to display the video.
Here is the handler for the vehicle adding a track to pc, and trying to pipe that to rcspc, the RTCPeerConnection object for Remote Station. Any tips or insight is greatly appreciated.
// Event listener for new track
/**
 * When pc receives a track (video), pipe it over to Remote Station
 * using the addTrack method. (Currently NOT piping the video appropriately)
 * @param {RTCTrackEvent} event
 */
pc.ontrack = (event) => {
    console.log("--------------------RECEIVED TRACK--------------------");
    if (event.streams && event.streams[0]) {
        console.log('received track from vehicle');
        // Add this stream to a temporary holding place
        tempStream.push(event.streams[0]);
        tempTrack = event.track;
        // Check that the rcspc peer connection has configured the remote
        // description / finished the signaling process
        if (rcspc.remoteDescription) {
            // Changing to tempStream/tempTrack does not resolve the issue here.
            rcspc.addTrack(event.track, event.streams[0]);
        }
    } else {
        if (!remoteStream) {
            remoteStream = new MediaStream();
            //videoElem.srcObject = remoteStream;
            console.log('created new remote stream!');
        }
        remoteStream.addTrack(event.track);
    }
};
For ontrack to fire on the other side of the rcspc connection, you need an offer/answer exchange after you've called addTrack. Otherwise the receiving side won't know what codec is used for the media you're sending.
From the Mozilla docs:
Note: Adding a track to a connection triggers renegotiation by firing a negotiationneeded event. See Starting negotiation for details.
So I would try creating a new offer on rcspc after you add the track, then wait for a new answer and set the remote description. After that, ontrack should fire on the Remote Station side.
Alternatively, you can do the offer/answer exchange in the onnegotiationneeded callback; it should make no difference.
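The renegotiation on the Base Station side could be sketched like this. Note that relayTrack, signaling.send and signaling.waitForAnswer are illustrative names standing in for your own signalling layer, not APIs from the question:

```javascript
// Relay a received track to the Remote Station's peer connection and
// renegotiate so that ontrack fires on the far side.
async function relayTrack(rcspc, track, stream, signaling) {
    rcspc.addTrack(track, stream);            // also fires negotiationneeded
    const offer = await rcspc.createOffer();  // new offer now describes the track
    await rcspc.setLocalDescription(offer);
    signaling.send({ type: 'offer', sdp: offer.sdp });
    // Only once the Remote Station's answer is applied will its ontrack fire.
    const answer = await signaling.waitForAnswer();
    await rcspc.setRemoteDescription(answer);
}
```

Hooking the same logic into onnegotiationneeded instead would work equally well, as noted above.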
Related
I am using OpenVidu for one of my projects. Say there are 2 users, User-A and User-B. Both are publishers and wish to subscribe to each other's videos. User-A initiated the conference session. Once User-B joins, a streamCreated event is triggered for User-B's stream in JavaScript on User-A's browser. But User-B is unaware that User-A exists. To address this we pull the existing publishers via the get-session API call, but that does not give access to the stream. How do we get access to the stream when all we have is the stream id?
User B will always receive (just after successfully calling Session.connect() method) one connectionCreated event for each user already connected to the session and one streamCreated event for each stream already being published into the session.
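Because streamCreated also fires once for each stream already being published when you connect, subscribing inside that handler covers both pre-existing and future publishers. A sketch against the openvidu-browser API, where 'subscriber-container' is an illustrative element id:

```javascript
// Subscribe to every stream in the session, past and future.
// Register the handler before calling session.connect(token).
function subscribeToAllStreams(session, containerId) {
    session.on('streamCreated', (event) => {
        session.subscribe(event.stream, containerId);
    });
}
```

There is no need to resolve stream ids from the get-session call yourself; the replayed events hand you the Stream objects directly.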
I stumbled on a weird issue in a WebRTC webapp. Here's the setup:
Client A and client B send audio via send-only WebRTC connections to an SFU.
Client C receives the audio streams from clients A and B via two receive-only connections to that same SFU and adds them to two different "audio" elements. The routing between these send and receive connections works properly.
Here's the problem:
On refreshing the page, sometimes client C hears audio from both client A and B. But most of the time client C only hears audio from either A or B, seemingly at random.
It's happening in both firefox and chrome.
Both connections are receiving data (see the "bitsReceivedPerSecond" graph), but only one connection is outputting audio. Here's an example where C could hear A but not B:
[stats graph for connection Client A -> C]
[stats graph for connection Client B -> C]
My understanding of these graphs is that the raw WebRTC connection is working fine (data is sent and received) but somehow a connection does not output audio randomly.
Does anyone have a clue how this can happen?
Here is the "ontrack" callback for adding the streams to the audio elements. The Logs do appear correctly for each connection.
gotRemoteStream(e) {
    Logger.log("Remote Streams: #" + e.streams.length);
    if (this.audioElement.srcObject !== e.streams[0]) {
        Logger.log("Received remote Stream with tracks to audio: " + this.audioElement.id);
        this.audioElement.srcObject = e.streams[0];
    }
}
A single audio element can only play one audio track at a time.
You said there were two incoming audio tracks, so if this.audioElement is the same element, then each call to gotRemoteStream will race setting srcObject, one overwriting the other.
This is most likely why you only hear one or the other.
The simplest solution might be to key off the stream associations sent, since they'll tend to differ:
const elements = {};

gotRemoteStream({streams: [stream]}) {
    if (!elements[stream.id]) {
        elements[stream.id] = this.audioElementA.srcObject ? this.audioElementB
                                                           : this.audioElementA;
    }
    elements[stream.id].srcObject = stream;
}
This should work for two incoming connections. More than that is left as an exercise.
I'm programming using PeerJS and, because PeerJS is already very outdated, I'm testing everything on Firefox version 38. I know it is not the best to do, but I don't have time for more. So, I'm trying to do the following:
Peer1 transmits audio and video to Peer2.
Peer2 wants to transmit to Peer3 the video that it receives from Peer1, but not the audio. Peer2 wants to send its own audio instead.
Basically, Peer3 will receive the video from Peer1 (with Peer2 relaying it) and the audio from Peer2, but it will all arrive together, as if it were a normal WebRTC call.
I do this like this:
var mixStream = remoteStream;
var audioMixStream = mixStream.getAudioTracks();
mixStream = mixStream.removeStream(audioMixStream);
var mixAudioStream = localStream;
var audioMixAudioStream = mixAudioStream.getAudioTracks();
mixStream.addTrack(audioMixAudioStream);
//Answer the call automatically instead of prompting user.
call.answer(window.audioMixAudioStream);
But I'm getting an error on removeStream. Maybe I will get more errors after that one, but now I'm stuck on this one.
Can someone please tell me what I should use instead of removeStream?
P.S.: I already tried removeTrack too and also got an error.
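For reference, MediaStream has never had a removeStream method; tracks are the unit you move between streams. The intended composition could be sketched with the track-based API like this (composeRelayStream is an illustrative name, and MediaStream is the browser global):

```javascript
// Build the stream Peer2 relays to Peer3: Peer1's video plus Peer2's audio.
function composeRelayStream(remoteStream, localStream) {
    const mixStream = new MediaStream();
    // Take only the video tracks from Peer1's relayed stream.
    remoteStream.getVideoTracks().forEach((t) => mixStream.addTrack(t));
    // Take only Peer2's own audio tracks.
    localStream.getAudioTracks().forEach((t) => mixStream.addTrack(t));
    return mixStream;
}
```

Peer2 would then answer the call with the composed stream, e.g. call.answer(composeRelayStream(remoteStream, localStream)). Whether this works on Firefox 38 with PeerJS is untested; getVideoTracks/getAudioTracks/addTrack were supported there, but old PeerJS builds may still behave differently.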
I've been having trouble establishing a WebRTC session and am trying to simplify the issue as much as possible. So I've written up a simple copy & paste example, where you just paste the offer/answer into webforms and click submit.
The HTML+JS, all in one file, can be found here: http://pastebin.com/Ktmb3mVf
I'm on a local network, and am therefore removing the ICE server initialisation process to make this example as bare-bones as possible.
Here are the steps I'm carrying out in the example:
Page 1
Load the page, enter a channel name (e.g. test) and click create.
A new Host object is created, new PeerConnection() and createDataChannel are called.
createOffer is called, and the resulting offerSDP is pasted into the offer textarea.
Page 2
Copy offerSDP from Page 1 and paste into offer textarea on Page 2, click join.
A new Guest object is created; a PeerConnection and an ondatachannel handler are set up.
setRemoteDescription is called for the Guest object, with the offerSDP data.
createAnswer is called and the result is pasted into the answer textarea box.
Page 1
The answerSDP is copied from Page 2 and pasted into the answer textarea of Page 1, submit answer is clicked.
Host.setRemoteDescription is called with the answerSDP data. This creates a SessionDescription, then peer.setRemoteDescription is called with the resulting data.
Those are the steps carried out in the example, but it seems I'm missing something critical. After the offerer's remoteDescription is set with the answerSDP, I try to send a test message on the dataChannel:
Chrome 40
"-- complete"
> host.dataChannel.send('hello world');
VM1387:2 Uncaught DOMException: Failed to execute 'send' on 'RTCDataChannel': RTCDataChannel.readyState is not 'open'
Firefox 35
"-- complete"
ICE failed, see about:webrtc for more details
> host.dataChannel.send('hello world');
InvalidStateError: An attempt was made to use an object that is not, or is no longer, usable
I also had a more complicated demo operating, with a WebSocket signalling server, and ICE candidates listed, but was getting the same error. So I hope this simplification can help to track down the issue.
Again, the single-file code link: http://pastebin.com/Ktmb3mVf
To enable WebRTC clients to connect to each other, you need ICE. While STUN and TURN, which you don't need for such a local test, are part of that, even without these helpers you still need ICE to tell the other end which IP/port/protocol to connect to.
There are two ways to do this. One is trickle ICE, where the SDP (offer/answer) is passed on without any ICE candidates; the candidates are then transported over a separate signaling channel and added as they are discovered. This speeds up the connection process, since ICE gathering takes time and some late ICE candidates might not be needed.
The classic method is to wait until all ICE candidates have been gathered, and then generate the SDP with these already included.
I have modified your latest version to do that: http://pastebin.com/g2YVvrRd
You also need to wait for the data channel/connection to become available before being able to use it, so I've moved the sending of the message to the channel's onopen event.
The significant changes to the original code:
The interface callbacks were removed from Host.prototype.createOffer and Guest.prototype.createAnswer, instead we attach the provided callback function to the respective objects for later use.
self.cb = cb;
Both Host and Guest have an added ICE handler for the PeerConnection:
var self = this;
this.peer.onicecandidate = function (event) {
    // This event is called for every discovered ICE candidate.
    // If this was trickle ICE, you'd pass them on here.
    // An event without an actual candidate signals the end of the
    // ICE collection process, which is what we need for classic ICE.
    if (!event.candidate) {
        // We fetch the up-to-date description from the PeerConnection.
        // It now contains lines with the available ICE candidates.
        self.offer = self.peer.localDescription;
        // Now we move on to the deferred callback function.
        self.cb(self.offer);
    }
};
For the guest self.offer becomes self.answer
The interface handler $("#submitAnswer").click() does not send the message anymore; instead it is sent when the data channel is ready, in the onopen event defined in setChannelEvents().
channel.onopen = function () {
    console.log('** channel.onopen');
    channel.send('hello world!');
};
Once I have exchanged session descriptions between two peers, how can I allow the user to prevent audio and/or video broadcast? Do I need to exchange the session descriptions again?
"Broadcast" is probably not the correct term since PeerConnections are always unicast peer-to-peer.
To acquire an audio/video stream from the user's devices you call getUserMedia() and to send these to the other peer you call addStream() on the PeerConnection object.
So, to allow the user not to send the acquired stream, just let her choose whether to call addStream() or not. E.g. show a popup saying "Send audio/video to the other user?". If she chooses "Yes", call addStream() on the PeerConnection object; otherwise just don't call it.
EDIT to answer question in comment:
If you'd like to stop sending audio and/or video, just call removeStream() on the PeerConnection object with the stream to remove as the parameter. Per the API spec, this will trigger a renegotiation.
See http://dev.w3.org/2011/webrtc/editor/webrtc.html#interface-definition for further details.
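The flow described above could be sketched like this. This uses the legacy addStream/removeStream API from the linked draft; modern code would use addTrack/removeTrack instead, and the userAgreed flag stands in for the result of your own "Send audio/video?" prompt:

```javascript
// Only attach the local stream if the user opted in; until addStream()
// is called, nothing is sent to the other peer.
function maybeSendStream(pc, stream, userAgreed) {
    if (userAgreed) {
        pc.addStream(stream);
    }
}

// Per the old spec, removing the stream triggers a renegotiation,
// so no manual offer/answer exchange is needed here.
function stopSending(pc, stream) {
    pc.removeStream(stream);
}
```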