I stumbled on a weird issue in a WebRTC webapp. Here's the setup:
Client A and client B each send audio via a send-only WebRTC connection to an SFU.
Client C receives the audio streams from clients A and B via two receive-only connections to that same SFU and attaches them to two different "audio" elements. The routing between these send and receive connections works properly.
Here's the problem:
On refreshing the page, sometimes client C hears audio from both client A and B. But most of the time client C hears audio from only A or only B, seemingly at random.
It happens in both Firefox and Chrome.
Both connections are receiving data (see the "bitsReceivedPerSecond" graphs), but only one connection is outputting audio. Here's an example where C could hear A but not B:
Connection client A -> C: [graph omitted]
Connection client B -> C: [graph omitted]
My understanding of these graphs is that the raw WebRTC connections are working fine (data is sent and received), but one of the connections randomly fails to output audio.
Does anyone have a clue how this can happen?
Here is the "ontrack" callback that adds the streams to the audio elements. The logs do appear correctly for each connection.
gotRemoteStream(e) {
  Logger.log("Remote Streams: #" + e.streams.length);
  if (this.audioElement.srcObject !== e.streams[0]) {
    Logger.log("Received remote Stream with tracks to audio: " + this.audioElement.id);
    this.audioElement.srcObject = e.streams[0];
  }
}
A single audio element can only play one audio track at a time.
You said there were two incoming audio tracks, so if this.audioElement is the same element, then each call to gotRemoteStream will race setting srcObject, one overwriting the other.
This is most likely why you only hear one or the other.
The simplest solution might be to key off the stream associations sent, since they'll tend to differ:
const elements = {};

gotRemoteStream({streams: [stream]}) {
  if (!elements[stream.id]) {
    elements[stream.id] = this.audioElementA.srcObject ? this.audioElementB
                                                       : this.audioElementA;
  }
  elements[stream.id].srcObject = stream;
}
This should work for two incoming connections. More than that is left as an exercise.
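If you ever do need to scale past two, here's a minimal sketch of one approach: create a dedicated audio element per incoming stream. Appending to document.body is an assumption; attach the elements wherever your layout needs them.

const elements = {};

function gotRemoteStream({streams: [stream]}) {
  if (!elements[stream.id]) {
    // One element per stream, so srcObject assignments never race each other.
    const audio = document.createElement("audio");
    audio.autoplay = true;
    document.body.appendChild(audio);
    elements[stream.id] = audio;
  }
  elements[stream.id].srcObject = stream;
}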
I am developing a 3-part system in which a remote vehicle connects to a base station via WebRTC, and the base station pipes video from the remote vehicle on to a remote station (also using an RTCPeerConnection). So the architecture looks like this:
Vehicle -- PC --> Base Station -- PC --> Remote Station
Where PC is a peer connection. Video is sent from Vehicle to Base Station using RTCPeerConnection's addTrack method. The video track must then be sent from Base Station (using the addTrack method) to Remote Station, arriving via Remote Station's ontrack handler. The problem is that NodeJS (which Base Station is implemented in) does not natively support MediaStreams (they are only available via a 3rd-party library), and I am not able to find a way to trigger the ontrack handler in Remote Station. If Vehicle and Remote Station connect directly, the video comes through perfectly, but Base Station is not triggering the appropriate event in Remote Station when piping. I added the package to support media streams, but this produces an error trying to locally store the stream and/or track (see the tempStream/tempTrack variables), and it does not seem to resolve the issue of the ontrack handler not being triggered in Remote Station and displaying the video.
Here is the handler for the vehicle adding a track to pc, which tries to pipe that track to rcspc, the RTCPeerConnection object for Remote Station. Any tips or insight is greatly appreciated.
// Event listener for new track
/**
 * When pc receives a track (video), pipe it over to Remote Station
 * using the addTrack method. (Currently NOT piping the video appropriately)
 * @param {RTCTrackEvent} event
 */
pc.ontrack = (event) => {
  console.log("--------------------RECEIVED TRACK--------------------");
  if (event.streams && event.streams[0]) {
    console.log('received track from vehicle');
    // add this stream/track to a temporary holding place
    tempStream.push(event.streams[0]);
    tempTrack = event.track;
    // check that the rcspc peer connection has configured the remote
    // description, i.e. finished the signaling process
    if (rcspc.remoteDescription) {
      // changing to tempStream/tempTrack does not resolve the issue here
      rcspc.addTrack(event.track, event.streams[0]);
    }
  } else {
    if (!remoteStream) {
      remoteStream = new MediaStream();
      // videoElem.srcObject = remoteStream;
      console.log('created new remote stream!');
    }
    remoteStream.addTrack(event.track);
  }
};
For ontrack to fire on the other side of the rcspc connection, you need an offer/answer exchange after you've called addTrack. Otherwise the receiving side won't know what codec is used for the media you're sending.
From the Mozilla docs:
Note: Adding a track to a connection triggers renegotiation by firing a negotiationneeded event. See Starting negotiation for details.
So I would try creating a new offer on rcspc after you add the track, then wait for the new answer and set the remote description. Then ontrack should fire on the Remote Station side.
Or you can do the offer/answer exchange in the onnegotiationneeded callback; it should make no difference.
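A minimal sketch of that renegotiation, assuming your existing rcspc plus some signaling channel to Remote Station (the sendToRemoteStation and onAnswer helper names are hypothetical; substitute your own transport):

rcspc.onnegotiationneeded = async () => {
  // Fires after addTrack; start a fresh offer/answer round.
  const offer = await rcspc.createOffer();
  await rcspc.setLocalDescription(offer);
  sendToRemoteStation({ sdp: rcspc.localDescription });
};

// When Remote Station's answer arrives over your signaling channel:
async function onAnswer(answer) {
  await rcspc.setRemoteDescription(answer);
  // ontrack should now fire on the Remote Station side.
}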
Hey folks,
I am new to WebRTC and have a question about how to start and stop a WebRTC track to save bandwidth.
My backend provides multiple video streams, so my SDP looks like this:
a=ssrc:3671328831 msid:user3339856498#host-602bca webrtctransceiver2
a=ssrc:3671328831 cname:user3339856498#host-602bca
a=mid:video0
a=ssrc:3671267278 msid:user3339856498#host-602bca webrtctransceiver3
a=ssrc:3671267278 cname:user3339856498#host-602bca
a=mid:video1
My goal is to let the user choose which stream they want to watch.
My problem is that tracks added by the RTCPeerConnection already receive data,
and even when I set track.enabled = false, data still arrives at full bandwidth.
Is there a way to start and stop the track so that no data is transmitted (RTCRtpReceiver)?
thanks
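Not an authoritative answer, but one avenue worth sketching: with Unified Plan you can flip the receiving transceiver's direction to "inactive" and renegotiate, which should stop the RTP flow; switching back to "recvonly" resumes it. The signaling.exchange helper below is hypothetical; plug in your own offer/answer transport:

// Hedged sketch: pause/resume reception of one m-line by its mid
// (e.g. "video1" from the SDP above).
async function setReceiving(pc, mid, receiving) {
  const transceiver = pc.getTransceivers().find(t => t.mid === mid);
  transceiver.direction = receiving ? "recvonly" : "inactive";
  // The direction change only takes effect after renegotiation.
  await pc.setLocalDescription(await pc.createOffer());
  const answer = await signaling.exchange(pc.localDescription); // hypothetical helper
  await pc.setRemoteDescription(answer);
}

// e.g. stop receiving video1 to save bandwidth:
// await setReceiving(pc, "video1", false);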
Sorry if this question is very stupid.
I am trying to show the stream from one person to another user using JS.
I have tried putting it in a cookie, but it doesn't work even then, even though the object in the video element is the same as the other.
File 1
var video = document.querySelector("#videoElement");
var x = document.cookie;
console.log(x);
video.srcObject = x;
File 2
var video = document.querySelector("#videoElement");
if (navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
      video.srcObject = stream;
      document.cookie = video.srcObject;
      console.log(stream, video.srcObject);
    })
    .catch(function (error) {
      console.log("Something went wrong!");
    });
}
For now I would just like to show it on two pages, but for the future, what language should I use to store the video? If you know, please share.
A cookie is not a centralized universal camera access repository on the web. Thank goodness.
A MediaStream is a local resource object representing active camera use, not a shareable URL.
This object lives solely in your local JS page, and isn't addressable on the web.
Since it doesn't live on any server, transporting the graphical bits from your camera to a friend's system requires quite a bit of heavy lifting. This includes establishing an RTCPeerConnection, which is the domain of WebRTC:
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    const iceServers = [{ urls: "stun:stun.l.google.com:19302" }];
    const pc = new RTCPeerConnection({ iceServers });
    for (const track of stream.getTracks())
      pc.addTrack(track, stream);
    /* lots of other WebRTC negotiation code */
  });
You'll also typically need a server of some kind, both to solve discovery (i.e. a point of contact) and to exchange, over something like a WebSocket, the critical offer/answer session descriptions that are necessary for connection establishment between the two peers.
Perhaps the simplest proof of concept is this cut'n'paste demo, which lets you and a friend exchange the required WebRTC offer/answer session descriptions manually, letting you establish a connection without any server, to see and talk to each other.
That has about a 70% chance of working. If you're both behind symmetric NATs (common on mobile networks), then it gets harder still: you'll need a TURN server, which costs money.
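For a feel of what that server-based exchange looks like, here's a rough sketch of offer/answer signaling over a WebSocket; the server URL and message shapes are assumptions, not part of any standard:

const signaling = new WebSocket("wss://example.com/signaling"); // hypothetical server
const pc = new RTCPeerConnection({ iceServers: [{ urls: "stun:stun.l.google.com:19302" }] });

// Trickle our ICE candidates to the other peer as they're gathered.
pc.onicecandidate = ({ candidate }) => {
  if (candidate) signaling.send(JSON.stringify({ candidate }));
};

signaling.onmessage = async ({ data }) => {
  const msg = JSON.parse(data);
  if (msg.sdp) {
    await pc.setRemoteDescription(msg.sdp);
    if (msg.sdp.type === "offer") {
      // We're the callee: answer back through the same channel.
      await pc.setLocalDescription(await pc.createAnswer());
      signaling.send(JSON.stringify({ sdp: pc.localDescription }));
    }
  } else if (msg.candidate) {
    await pc.addIceCandidate(msg.candidate);
  }
};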
I am using WebRTC with Asterisk, and about 5% of the time there is no audio due to a signaling error. The simple fix is: if no audio is coming through, stop the connection and try again. I know this is a band-aid while I fix the real issue, but for now I will band-aid my code.
To get the audio I am doing the following:
var remoteStream = new MediaStream();
peerConnection.getReceivers().forEach((receiver) => {
  remoteStream.addTrack(receiver.track);
});
callaudio.srcObject = remoteStream;
callaudio.play();
The problem here is that the remote stream always adds a track even when there is no audio coming out of the speakers.
If you inspect chrome://webrtc-internals you can see there is no audio being sent, but there is still a receiver. You can look at the media stream, and see there is indeed an audio track. There is everything supporting the fact that I should be hearing something, but I hear nothing 5% of the time.
My solution is to get the data from the receiver track and check whether anything is coming across it, but I have no idea how to read that data. I have the Web Audio API working, but it only helps if some sound is currently being played, and sometimes the person on the other end does not talk for 10 seconds. I need a way to read the raw data and see that something is coming across. I just want to know: is there ANY data on a MediaStream?
If you do remoteStream.getAudioTracks() you get back an audio track because there is one, there is just no audio going across that track.
In the latest API, receiver.track is present before a connection is made, even if it goes unused, so you shouldn't infer anything from its presence.
There are at least 5 ways to check when audio reception has been negotiated:
Retroactively: Check receiver.track.muted. Remote tracks are born muted, and receive an unmute event if/once data arrives:
audioReceiver.track.onunmute = () => console.log("Audio data arriving!");
Proactively: Use pc.ontrack. A track event is fired as a result of negotiation, but only for tracks that will receive data. An event for trackEvent.track.kind == "audio" means there will be audio.
Automatic: Use the remote stream provided in trackEvent.streams[0] instead of your own (assuming the other side added one in addTrack). RTCPeerConnection populates this one based on what's negotiated only (no audio track present unless audio is received).
Unified-plan: Check transceiver.direction: "sendrecv" or "recvonly" means you're receiving something; "sendonly" or "inactive" means you're not.
Beyond negotiation: Use getStats() to inspect .packetsReceived of the "inbound-rtp" stats for .kind == "audio" to see if packets are flowing (see the polling sketch below).
The first four are deterministic checks of what's been negotiated. The fifth is only if everything else checks out, but you're still not receiving audio for some reason.
All these work regardless of whether the audio is silent or not, as you requested (your question is really about what's been negotiated, not about what's audible).
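For the fifth approach, here's a minimal polling sketch, assuming an already-connected pc; the helper name is mine, not part of any API:

// Count inbound audio RTP packets seen so far on this connection.
async function audioPacketsReceived(pc) {
  const stats = await pc.getStats();
  let packets = 0;
  stats.forEach(report => {
    if (report.type === "inbound-rtp" && report.kind === "audio") {
      packets += report.packetsReceived;
    }
  });
  return packets;
}

// Compare two samples a second apart to detect a stalled stream:
// const before = await audioPacketsReceived(pc);
// setTimeout(async () => {
//   const after = await audioPacketsReceived(pc);
//   if (after === before) console.log("No audio packets in the last second");
// }, 1000);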
For more on this, check out my blog with working examples in Chrome and Firefox.
A simple hack is:
During the first second, play a 1 Hz tone to the server.
If the server receives it within the first second, it plays a 2 Hz tone back; if not, it plays back the 1 Hz tone.
If the client doesn't get the 2 Hz tone back from the server, it restarts the connection.
Please note, you should be muted while doing this.
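A loose sketch of that idea with the Web Audio API (pc is your RTCPeerConnection; the frequency comes from the hack above, and in practice a tone that low is inaudible anyway):

const ctx = new AudioContext();

// Generate the probe tone and feed it into the peer connection.
const osc = ctx.createOscillator();
osc.frequency.value = 1; // the "1 Hz" probe from above
const dest = ctx.createMediaStreamDestination();
osc.connect(dest);
osc.start();
pc.addTrack(dest.stream.getAudioTracks()[0], dest.stream);

// On the receiving side, check for any energy on the remote track.
function hasSignal(remoteTrack) {
  const src = ctx.createMediaStreamSource(new MediaStream([remoteTrack]));
  const analyser = ctx.createAnalyser();
  src.connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);
  return data.some(v => v > 0);
}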
I'm programming using PeerJS and, because PeerJS is already very outdated, I'm testing everything on Firefox version 38. I know it's not ideal, but I don't have time for more. So I'm trying to do the following:
Peer1 transmits audio and video to Peer2.
Peer2 wants to transmit to Peer3 the video it receives from Peer1, but not the audio. Peer2 wants to send its own audio instead.
Basically, Peer3 will receive the video from Peer1 (with Peer2 relaying it) and the audio from Peer2, but it should all arrive together, as in a normal WebRTC call.
I do it like this:
var mixStream = remoteStream;
var audioMixStream = mixStream.getAudioTracks();
mixStream = mixStream.removeStream(audioMixStream);
var mixAudioStream = localStream;
var audioMixAudioStream = mixAudioStream.getAudioTracks();
mixStream.addTrack(audioMixAudioStream);
//Answer the call automatically instead of prompting user.
call.answer(window.audioMixAudioStream);
But I'm getting an error on removeStream. Maybe I will get more errors after that one, but for now I'm stuck on this one.
Can someone please tell me what I should use instead of removeStream?
P.S.: I already tried removeTrack too and got an error as well.
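For what it's worth, a hedged sketch of an alternative that avoids removeStream entirely: assemble a new MediaStream from just the tracks you want (remoteStream and localStream are the names from the code above). Note the track-array MediaStream constructor is a newer API and may not exist in Firefox 38:

// Combine Peer1's relayed video with Peer2's own audio.
const mixStream = new MediaStream([
  ...remoteStream.getVideoTracks(), // video relayed from Peer1
  ...localStream.getAudioTracks(),  // Peer2's own audio
]);

// Answer the call automatically instead of prompting the user.
call.answer(mixStream);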