I'm experimenting with WebRTC between two browsers using RTCPeerConnection and my own long-polling implementation. I've created a demo application, which works successfully in Mozilla Nightly (22); however, in Chrome (25) I don't get any remote video, only an "empty black video" appears. Is there something wrong in my JS code?
The function sendMessage(message) sends a message to the server via long-polling; on the other side it is received by onMessage().
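The wrapper itself isn't relevant to the problem, but as a rough sketch of what it looks like (the /send and /poll endpoints here are stand-ins, not my real server API):

// Hypothetical long-polling signaling wrapper; /send and /poll are placeholder endpoints
function sendMessage(message) {
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/send", true);
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(message); // message is already a JSON string
}

function poll() {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/poll", true);
    xhr.onload = function() {
        if (xhr.status === 200 && xhr.responseText)
            onMessage(xhr.responseText); // hand the raw JSON string to onMessage()
        poll(); // immediately re-issue the long-poll request
    };
    xhr.onerror = function() {
        setTimeout(poll, 1000); // back off briefly on errors
    };
    xhr.send();
}
poll();

Here is the WebRTC part: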
var peerConnection;
var peerConnection_config = {"iceServers": [{"url": "stun:23.21.150.121"}]};

// when a message from the server is received
function onMessage(evt) {
    if (!peerConnection)
        call(false);
    var signal = JSON.parse(evt);
    if (signal.sdp) {
        peerConnection.setRemoteDescription(new RTCSessionDescription(signal.sdp));
    } else {
        peerConnection.addIceCandidate(new RTCIceCandidate(signal.candidate));
    }
}

function call(isCaller) {
    peerConnection = new RTCPeerConnection(peerConnection_config);

    // send any ice candidates to the other peer
    peerConnection.onicecandidate = function(evt) {
        sendMessage(JSON.stringify({"candidate": evt.candidate}));
    };

    // once the remote stream arrives, show it in the remote video element
    peerConnection.onaddstream = function(evt) {
        // attach media stream to remote video - WebRTC wrapper
        attachMediaStream($("#remote-video").get(0), evt.stream);
    };

    // get the local stream, show it in the local video element and send it
    getUserMedia({"audio": true, "video": true}, function(stream) {
        // attach media stream to local video - WebRTC wrapper
        attachMediaStream($("#local-video").get(0), stream);
        $("#local-video").get(0).muted = true;
        peerConnection.addStream(stream);

        if (isCaller)
            peerConnection.createOffer(gotDescription);
        else
            peerConnection.createAnswer(gotDescription);

        function gotDescription(desc) {
            sendMessage(JSON.stringify({"sdp": desc}));
            peerConnection.setLocalDescription(desc);
        }
    }, function() {
    });
}
My best guess is that there is a problem with your STUN server configuration. To determine if this is the issue, try using Google's public STUN server stun:stun.l.google.com:19302 (which won't work in Firefox, but should definitely work in Chrome), or test on a local network with no STUN server configured.
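For reference, the only change needed for that test is the ICE server list (and an empty list for the local-network case):

// Google's public STUN server, to rule out STUN problems (Chrome-only test)
var peerConnection_config = {"iceServers": [{"url": "stun:stun.l.google.com:19302"}]};

// ...or, when both peers are on the same local network, no STUN at all:
// var peerConnection_config = {"iceServers": []};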
Also, verify that your ICE candidates are being delivered properly. Firefox doesn't actually generate 'icecandidate' events (it includes the candidates in the offer/answer), so an issue with delivering candidate messages could also explain the discrepancy.
Finally, make sure the autoplay attribute is set on your video elements.
I have tried a couple of solutions already, but nothing works for me.
I want to stream audio from my PC to another computer with almost zero latency. Things are mostly working: the sound is clear and not choppy at all, but there is a delay between the moment audio starts playing on my PC and when it starts on the remote PC. For example, when I click YouTube's 'play' button, audio starts playing on the remote machine only after 3-4 seconds. The same happens when I click 'Pause': the sound on the remote PC stops after a couple of seconds.
I've tried using websockets and a plain audio tag, but no luck so far.
For example, this is my solution using websockets and pipes:
import asyncio

import websockets
import win32file
import win32pipe

def create_pipe():
    # named pipe that ffmpeg writes its MP3 output into
    return win32pipe.CreateNamedPipe(r'\\.\pipe\__audio_ffmpeg', win32pipe.PIPE_ACCESS_INBOUND,
                                     win32pipe.PIPE_TYPE_MESSAGE |
                                     win32pipe.PIPE_READMODE_MESSAGE |
                                     win32pipe.PIPE_WAIT, 1, 1024 * 8, 1024 * 8, 0, None)

async def echo(websocket):
    pipe = create_pipe()
    win32pipe.ConnectNamedPipe(pipe, None)
    while True:
        # read a chunk from the pipe and forward it over the websocket
        data = win32file.ReadFile(pipe, 1024 * 2)
        await websocket.send(data[1])

async def main():
    async with websockets.serve(echo, "0.0.0.0", 7777):
        await asyncio.Future()  # run forever

if __name__ == '__main__':
    asyncio.run(main())
This is how I start ffmpeg:
.\ffmpeg.exe -f dshow -i audio="Stereo Mix (Realtek High Definition Audio)" -acodec libmp3lame -ab 320k -f mp3 -probesize 32 -muxdelay 0.01 -y \\.\pipe\__audio_ffmpeg
On the JS side the code is a little bit long, but essentially I am just reading the websocket and appending to a MediaSource buffer:
this.buffer = this.mediaSource.addSourceBuffer('audio/mpeg')
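A stripped-down sketch of that receive-and-append loop (the structure and names are approximations of my actual code, not the code itself):

// Assumed sketch: feed MP3 chunks from the websocket into a MediaSource buffer
const audio = document.querySelector('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', () => {
    const buffer = mediaSource.addSourceBuffer('audio/mpeg');
    const queue = [];
    buffer.addEventListener('updateend', () => {
        if (queue.length) buffer.appendBuffer(queue.shift());
    });

    const ws = new WebSocket('ws://localhost:7777');
    ws.binaryType = 'arraybuffer';
    ws.onmessage = (e) => {
        // append directly when the buffer is idle, otherwise queue the chunk
        if (buffer.updating || queue.length) queue.push(e.data);
        else buffer.appendBuffer(e.data);
    };
    audio.play(); // may require a user gesture depending on autoplay policy
});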
Also, as you can see, I tried using the -probesize 32 -muxdelay 0.01 flags, but no luck there either.
I tried using a plain <audio> tag as well, but the couple-of-seconds delay is still there.
What can I do? Am I missing something? Maybe I have to disable buffering somewhere?
I have some code, but everything I learned came from the https://webrtc.github.io/samples/ website and some from MDN. It's pretty simple.
The idea is to connect 2 peers using a negotiating server just for the initial connection. Afterwards they can share streams (audio, video, data). When I say peers I mean client computers like browsers.
So here's an example of connecting, broadcasting and, of course, receiving.
Now for some of my code.
A sketch of the process:
Note: the same code is used for connecting to and connecting from; this is how my app works because it's kind of like a chat. ClientOutgoingMessages and ClientIncomingMessages are just my wrappers around sending messages to the server (I use websockets, but ajax is also possible).
Start: a peer initiates an RTCPeerConnection and sends an offer via the server, and also sets up events for receiving. The other peer is notified of the offer by the server, then sends an answer the same way (should they choose to), and finally the original peer accepts the answer and starts streaming. Along the way there is another event about a candidate; I never even bothered to learn what it is. It works without knowing.
function create_pc(peer_id) {
    var pc = new RTCPeerConnection(configuration);
    var sender;
    var localStream = MyStreamer.get_dummy_stream();
    for (var track of localStream.getTracks()) {
        sender = pc.addTrack(track, localStream);
    }

    // when a remote user adds a stream to the peer connection, we display it
    pc.ontrack = function (e) {
        console.log("got a remote stream");
        remoteVideo.style.visibility = 'visible';
        remoteVideo.srcObject = e.streams[0];
    };

    // Setup ice handling
    pc.onicecandidate = function (ev) {
        if (ev.candidate) {
            ClientOutgoingMessages.candidate(peer_id, ev.candidate);
        }
    };

    // status
    pc.oniceconnectionstatechange = function (ev) {
        var state = pc.iceConnectionState;
        console.log("oniceconnectionstatechange: " + state);
    };

    MyRTC.set_pc(peer_id, {
        pc: pc,
        sender: sender
    });
    return pc;
}
function offer_someone(peer_id, peer_name) {
    var pc = MyRTC.create_pc(peer_id);
    pc.createOffer().then(function (offer) {
        ClientOutgoingMessages.offer(peer_id, offer);
        pc.setLocalDescription(offer);
    });
}

function answer_offer(peer_id) {
    var pc = MyRTC.create_pc(peer_id);
    var offer = MyOpponents.get_offer(peer_id);
    pc.setRemoteDescription(new RTCSessionDescription(offer));
    pc.createAnswer().then(function (answer) {
        pc.setLocalDescription(answer);
        ClientOutgoingMessages.answer(peer_id, answer);
        // alert("rtc established!")
        MyStreamer.stream_current();
    });
}
Handling messages from the server:
offer: function offer(data) {
    if (MyRTC.get_pc(data.connectedUser)) {
        // alert("Not accepting offers already have a conn to " + data.connectedUser)
        // return;
    }
    MyOpponents.set_offer(data.connectedUser, data.offer);
},

answer: function answer(data) {
    var opc = MyRTC.get_pc(data.connectedUser);
    opc && opc.pc.setRemoteDescription(new RTCSessionDescription(data.answer)).catch(function (err) {
        console.error(err);
        // alert(err)
    });
    // alert("rtc established!")
    MyStreamer.stream_current();
},

candidate: function candidate(data) {
    var opc = MyRTC.get_pc(data.connectedUser);
    opc && opc.pc.addIceCandidate(new RTCIceCandidate(data.candidate));
},

leave: function leave(data) {
    MyRTC.close_pc(data.connectedUser);
},
I am making a web app using WebRTC that allows two users to communicate with each other using both video and audio. The app uses node.js as the signaling server. The app works fine when communicating between two desktops, but when I try desktop-to-mobile communication, if the user initiating the offer is the one on the desktop, the one on mobile can't hear any sound. If it happens the other way around, both have audio. When I check the devtools, the audio stream is sent from the desktop and is received by the mobile (it is active and not muted), but there is no sound. I use the audio element to play the audio stream and the video element to play the video stream. I have tested this on both Chrome and Firefox and I encounter the same problem.
If anyone can help it would be greatly appreciated.
Below are code samples of the ontrack event
rtcConnection.ontrack = function(event) {
    console.log('Remote stream received.');
    if (event.streams[0].getAudioTracks().length > 0) {
        event.streams[0].getAudioTracks().forEach((track) => {
            remoteAudioStream.addTrack(track);
        });
        audioPlayer.srcObject = remoteAudioStream;
    }
    if (event.streams[0].getVideoTracks().length > 0) {
        event.streams[0].getVideoTracks().forEach((track) => {
            remoteVideoStream.addTrack(track);
        });
        localVideo.srcObject = remoteVideoStream;
    }
};
and of capturing the media stream:
function getUserMedia() {
    let getAudio = true;
    let getVideo = true;
    let constraints = { audio: getAudio, video: getVideo };
    navigator.mediaDevices.getUserMedia(constraints) // ask the user to allow access to their media devices
        .then(
            function(data) { // if yes, get stream config data and join room
                localStream = data;
                console.log('Getting user media succeeded.');
                console.log('RTC Connection created. Getting user media. Adding stream tracks to RTC connection');
                sendMessage({ type: 'peermessage', messagetype: 'info', messagetext: 'Peer started video streaming.' });
                // stream to be sent to the other user
                localStream.getTracks().forEach(track => rtcConnection.addTrack(track, localStream));
                console.log('Creating offer');
                rtcConnection.createOffer()
                    .then(function(offer) { // createOffer success
                        console.log('Offer created. Setting it as local description');
                        return rtcConnection.setLocalDescription(offer);
                    }, logError) // createOffer error
                    .then(function() { // setLocalDescription success
                        console.log('Offer set as local description. Sending it to agent');
                        sendMessage(rtcConnection.localDescription);
                    }, logError); // setLocalDescription error
            }
        );
}
I have a video call application based on WebRTC. It is working as expected. However, when a call is going on, if I disconnect and reconnect the audio device (mic + speaker), only the speaker part keeps working. The mic part seems to stop working: the other side can't hear anymore.
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Is there any way to inform WebRTC to take audio input again once audio device is connected back?
Your question appears simple (the symmetry with speakers is alluring), but once we're dealing with users who have multiple cameras and microphones, it's not that simple: if your user disconnects the Bluetooth headset they were using, should you wait for them to reconnect it, or immediately switch to their laptop microphone? If the latter, do you switch back if they reconnect it later? These are application decisions.
The APIs to handle these things are primarily the ended and devicechange events, and the replaceTrack() method. You may also need the deviceId constraint and the enumerateDevices() method to handle multiple devices.
However, to keep things simple, let's take the assumptions in your question at face value to explore the APIs:
When the user unplugs their sole microphone (not their camera) mid-call, our job is to resume conversation with it when they reinsert it, without dropping video:
First, we listen to the ended event to learn when our local audio track drops.
When that happens, we listen for a devicechange event to detect re-insertion (of anything).
When that happens, we could check what changed using enumerateDevices(), or simply try getUserMedia again (microphone only this time).
If that succeeds, use await sender.replaceTrack(newAudioTrack) to send our new audio.
This might look like this:
let sender;

(async () => {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: true});
        pc.addTrack(stream.getVideoTracks()[0], stream);
        sender = pc.addTrack(stream.getAudioTracks()[0], stream);
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        console.log(e);
    }
})();

async function tryAgain() {
    try {
        const stream = await navigator.mediaDevices.getUserMedia({audio: true});
        await sender.replaceTrack(stream.getAudioTracks()[0]);
        navigator.mediaDevices.ondevicechange = null;
        sender.track.onended = () => navigator.mediaDevices.ondevicechange = tryAgain;
    } catch (e) {
        if (e.name == "NotFoundError") return;
        console.log(e);
    }
}

// Your usual WebRTC negotiation code goes here
The above is for illustration only. I'm sure there are lots of corner cases to consider.
I have built a simple streaming service using WebRTC. I'm currently still running everything through localhost. Everything works when using the Chrome browser, but I cannot connect when I use Firefox. I am using the webrtc-adapter shim.
The problem seems to stem from peerConnection.localDescription always being null, and from being unable to send my localDescription to the peer or set the remoteDescription correctly.
Here is a snippet of my code. This only covers the recipient of the stream, who is initiating the p2p connection. The streamer already has a local stream set up and sets their own local and remote description, and the localDescription is then sent to the recipient. sendRecipientDescription() just handles sending the SDP to the streamer via sockets. PC_CONFIG just includes a STUN server:
setUpRecipient = () => {
    this.createPeerConnection();
    this.pc
        .createOffer({ offerToReceiveVideo: true })
        .then(offer => {
            this.pc.setLocalDescription(offer);
        })
        .then(() => {
            this.sendRecipientDescription();
            console.log('recipient local description ', this.pc.localDescription);
        })
        .catch(e => {
            console.log('error recipient set up ', e);
        });
};
createPeerConnection = () => {
    try {
        this.pc = new RTCPeerConnection(PC_CONFIG);
        this.pc.onicecandidate = this.handleIceCandidate;
        this.pc.ontrack = this.handleRemoteStreamAdded;
        this.pc.onremovetrack = this.handleRemoteStreamRemoved;
        this.pc.oniceconnectionstatechange = this.handleIceStateChange;
        console.log('Created RTCPeerConnection', this.pc.localDescription);
    } catch (e) {
        console.log('Failed to create PeerConnection, exception: ', e.message);
    }
};
When using Chrome, this.pc.localDescription returns what would be expected. When using Firefox, this.pc.localDescription is always null; there is no RTCSessionDescription at all. When I console.log(this.pc) after setLocalDescription, it appears as though localDescription is indeed null (screenshot: un-expanded RTCPeerConnection).
However, when I expand the RTCPeerConnection object in the console, localDescription appears to be set up correctly (screenshot: expanded RTCPeerConnection). But when I try to send this.pc.localDescription, it only sends null.
I found the answer to my own question: apparently I needed to return this.pc.setLocalDescription(offer) from the first .then().
I don't know why this is necessary. As far as I know, pc.setLocalDescription does not return anything, and only has the side effect of setting pc.localDescription. It worked perfectly fine in Chrome, but not in Firefox.
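It turns out setLocalDescription() does return a promise. Returning it from the first .then() makes the next .then() wait until the local description has actually been applied before sendRecipientDescription() runs, and Firefox apparently only populates pc.localDescription once that promise resolves, while Chrome happened to have it available earlier. A sketch of the corrected chain, based on the code above:

setUpRecipient = () => {
    this.createPeerConnection();
    this.pc
        .createOffer({ offerToReceiveVideo: true })
        .then(offer => {
            return this.pc.setLocalDescription(offer); // return the promise so the next .then() waits
        })
        .then(() => {
            // localDescription is populated by the time we send it
            this.sendRecipientDescription();
            console.log('recipient local description ', this.pc.localDescription);
        })
        .catch(e => {
            console.log('error recipient set up ', e);
        });
};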
So far I have successfully established (using a node.js server) an RTC connection between two peers with a datachannel. I can send data back and forth.
I have also successfully streamed the webcam from one peer to another and vice versa.
How exactly am I doing this?
Both peers do this:
function handleRemoteStreamAdded(event) {
    console.log('Remote stream added.');
    remoteStream = event.stream;
    remoteVideo.srcObject = remoteStream;
}

function gotStream(stream) {
    ...
    pc.addStream(stream);
    ...
}

navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
...
pc = new RTCPeerConnection();
...
pc.onaddstream = handleRemoteStreamAdded;
So I basically say that whenever a stream is added to the connection (pc.addStream), handleRemoteStreamAdded is triggered. It all works fine.
But what I really want to do as a next step is to add a button to each client and give each of them the option of whether or not to stream their cam to the other side. If they do, the stream should start automatically on the other end. Unfortunately, I just can't figure out how.
Theoretically, what I thought of is to add an event listener to a button, and then the event triggers:
navigator.mediaDevices.getUserMedia(constraints).then(gotStream).catch(err);
By doing this I basically also call pc.addStream(stream) via the function gotStream. Then I send a message to the other end like "display my cam", and on receiving this message the other peer should somehow trigger handleRemoteStreamAdded. But within this function there is the pre-defined event that I can only "access" locally via pc.onaddstream = handleRemoteStreamAdded;
How can I automatically start streaming the other side's cam as soon as I either get a message like "display my cam" or some event fires?
Are you creating another offer and doing a new signaling exchange after calling pc.addStream? (Which, FWIW, is deprecated; prefer addTrack and ontrack.)
See https://webrtc.github.io/samples/src/content/peerconnection/upgrade/ for a similar thing adding video to an audio-only call.
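A rough sketch of how that renegotiation could look (sendSignal()/onSignal() and shareCamButton are placeholders for your own signaling channel and UI, not real APIs): the button click does getUserMedia + addTrack, onnegotiationneeded creates the new offer, the other side answers it, and its ontrack handler fires automatically, so the remote cam shows up without any extra "display my cam" message.

// Rough sketch: on-demand cam sharing via renegotiation.
// sendSignal()/onSignal() and shareCamButton are placeholders, not real APIs.
pc.onnegotiationneeded = async () => {
    const offer = await pc.createOffer();
    await pc.setLocalDescription(offer);
    sendSignal({ sdp: pc.localDescription });
};

pc.ontrack = (e) => {
    remoteVideo.srcObject = e.streams[0]; // remote cam appears automatically
};

shareCamButton.onclick = async () => {
    const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
    for (const track of stream.getTracks()) {
        pc.addTrack(track, stream); // triggers onnegotiationneeded above
    }
};

// The receiving side answers the renegotiation offer just like the initial one:
async function onSignal(msg) {
    if (msg.sdp && msg.sdp.type === 'offer') {
        await pc.setRemoteDescription(msg.sdp);
        const answer = await pc.createAnswer();
        await pc.setLocalDescription(answer);
        sendSignal({ sdp: pc.localDescription });
    } else if (msg.sdp) {
        await pc.setRemoteDescription(msg.sdp); // the answer
    }
}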