I have two devices, A and B.
I found that if the caller (i.e. device A) does not have a webcam, the answer generated by the callee (i.e. device B) does not contain any video information, even though the callee has a webcam.
As a result, device A cannot show the remote video stream.
However, when I swap the roles of these devices, the answer generated by the callee (i.e. device A) does contain video information, even though device A does not have a webcam. Why?
Is there any reasoning behind this?
Is there any other way to ensure that the callee side can show the remote video stream?
You can fix this by passing options to the createOffer() method:
pc.createOffer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true
});
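For context, a minimal caller-side sketch of how these options are typically used might look like the following; the sendOffer() helper is hypothetical and stands in for whatever signaling channel you use:
const pc = new RTCPeerConnection();

// Even though this device has no camera, ask the remote side to send audio and video.
pc.createOffer({ offerToReceiveAudio: true, offerToReceiveVideo: true })
  .then(offer => pc.setLocalDescription(offer))
  .then(() => sendOffer(pc.localDescription)) // hypothetical signaling helper
  .catch(err => console.error(err));
Note that offerToReceiveAudio/offerToReceiveVideo are legacy options; in spec-current code the same effect can be achieved by calling pc.addTransceiver("video", { direction: "recvonly" }) (and likewise for "audio") before createOffer().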
Sorry if the question is very stupid.
I am trying to show the stream from one person to another user using JS.
I have tried putting it in a cookie, but it doesn't work even then, even though the object in the video element is the same as the other one.
File 1
var video = document.querySelector("#videoElement");
var x = document.cookie;
console.log(x);
video.srcObject = x;
File 2
var video = document.querySelector("#videoElement");
if (navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function (stream) {
      video.srcObject = stream;
      // Attempt to save the stream in a cookie for the other page (this coerces the MediaStream to a string).
      document.cookie = video.srcObject;
      console.log(stream, video.srcObject);
    })
    .catch(function (err) {
      console.log("Something went wrong!", err);
    });
  console.log(video.srcObject);
}
For now I would just like to show it on two pages, but for the future, what language should I use to store the video? If you know, please share.
A cookie is not a centralized universal camera access repository on the web. Thank goodness.
A MediaStream is a local resource object representing active camera use, not a shareable URL.
This object lives solely in your local JS page, and isn't addressable on the web.
Since it doesn't live on any server, transporting the graphical bits from your camera to a friend's system requires quite a bit of heavy lifting. This includes establishing an RTCPeerConnection, which is the domain of WebRTC:
navigator.mediaDevices.getUserMedia({ video: true })
  .then(function (stream) {
    const iceServers = [{ urls: "stun:stun.l.google.com:19302" }];
    const pc = new RTCPeerConnection({ iceServers });
    // Send every local track to the remote peer.
    for (const track of stream.getTracks()) {
      pc.addTrack(track, stream);
    }
    /* lots of other WebRTC negotiation code */
  })
  .catch(function (err) {
    console.log(err);
  });
You'll also typically need a server of some kind: both to solve discovery (i.e. a point of contact) and, usually as a WebSocket server, to exchange between the two peers the critical offer/answer session descriptions that are necessary for connection establishment.
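As a rough illustration only, the negotiation step on the calling side might look something like this, assuming a hypothetical WebSocket signaling server at wss://example.com/signaling and a pc and stream set up as above:
const signaling = new WebSocket("wss://example.com/signaling"); // hypothetical signaling server

// Send our offer and trickle ICE candidates to the other peer via the signaling server.
pc.onicecandidate = e => {
  if (e.candidate) signaling.send(JSON.stringify({ candidate: e.candidate }));
};
pc.onnegotiationneeded = async () => {
  await pc.setLocalDescription(await pc.createOffer());
  signaling.send(JSON.stringify({ description: pc.localDescription }));
};

// Apply whatever the other peer sends back (its answer and its ICE candidates).
signaling.onmessage = async ({ data }) => {
  const { description, candidate } = JSON.parse(data);
  if (description) await pc.setRemoteDescription(description);
  if (candidate) await pc.addIceCandidate(candidate);
};
That is only the caller's half; the answering peer does the mirror-image dance (setRemoteDescription with the offer, createAnswer, setLocalDescription, then send the answer back).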
Perhaps the simplest proof of concept is this cut'n'paste demo, which lets you and a friend exchange the required WebRTC offer/answer session descriptions manually, so you can establish a connection without any server and see and talk to each other.
That has about a 70% chance of working. If you're both behind symmetric NATs (most mobile networks), then it gets harder still (you'll need a TURN server, which costs money).
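For reference, adding a TURN server is just another iceServers entry; the hostname and credentials below are placeholders, not a real service:
const pc = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    // Placeholder TURN entry; point this at your own TURN deployment and credentials.
    { urls: "turn:turn.example.com:3478", username: "user", credential: "secret" }
  ]
});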
I am using WebRTC with Asterisk and, about 5% of the time, there is no audio due to an error with signaling. The simple fix is: if there is no audio coming through, stop the connection and try again. I know this is a band-aid while I fix the real issue, but for now I will band-aid my code.
To get the audio I am doing the following:
var remoteStream = new MediaStream();
peerConnection.getReceivers().forEach((receiver) => {
  remoteStream.addTrack(receiver.track);
});
callaudio.srcObject = remoteStream;
callaudio.play();
The problem here is that the remote stream always adds a track even when there is no audio coming out of the speakers.
If you inspect chrome://webrtc-internals you can see there is no audio being sent, but there is still a receiver. You can look at the media stream, and see there is indeed an audio track. There is everything supporting the fact that I should be hearing something, but I hear nothing 5% of the time.
My solution is to get the data from the receiver track and check whether anything is coming across, but I have no idea how to read that data. I have the Web Audio API working, but it only works if there is some sound currently being played, and sometimes the person on the other end does not talk for 10 seconds. I need a way to read the raw data and see that something is coming across. I just want to know: is there ANY data on a MediaStream?
If you do remoteStream.getAudioTracks() you get back an audio track, because there is one; there is just no audio going across that track.
In the latest API, receiver.track is present before a connection is made, even if it goes unused, so you shouldn't infer anything from its presence.
There are at least 5 ways to check when audio reception has been negotiated:
Retroactively: Check receiver.track.muted. Remote tracks are born muted, and receive an unmute event if/once data arrives:
audioReceiver.track.onunmute = () => console.log("Audio data arriving!");
Proactively: Use pc.ontrack. A track event is fired as a result of negotiation, but only for tracks that will receive data. An event for trackEvent.track.kind == "audio" means there will be audio.
Automatic: Use the remote stream provided in trackEvent.streams[0] instead of your own (assuming the other side added one in addTrack). RTCPeerConnection populates this one based on what's negotiated only (no audio track present unless audio is received).
Unified-plan: Check transceiver.direction: "sendrecv" or "recvonly" means you're receiving something; "sendonly" or "inactive" means you're not.
Beyond negotiation: Use getStats() to inspect .packetsReceived of the "inbound-rtp" stats for .kind == "audio" to see if packets are flowing.
The first four are deterministic checks of what's been negotiated. The fifth is only for when everything else checks out but you're still not receiving audio for some reason.
All of these work regardless of whether the audio is silent or not, as you requested (your question is really about what's been negotiated, not about what's audible).
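As an illustrative sketch only (pc and audioEl are assumed names for the peer connection and the audio element), here is how a couple of these checks might be wired up, including a getStats() poll for inbound audio packets:
pc.ontrack = ({ track, streams }) => {
  if (track.kind !== "audio") return;
  // Negotiation says audio will arrive; attach the negotiated stream.
  audioEl.srcObject = streams[0];
  // The track unmutes once RTP data actually starts flowing.
  track.onunmute = () => console.log("Audio data arriving!");
};

// Beyond negotiation: poll getStats() to confirm packets are really coming in.
async function audioPacketsReceived(pc) {
  const stats = await pc.getStats();
  let packets = 0;
  stats.forEach(report => {
    if (report.type === "inbound-rtp" && report.kind === "audio") {
      packets += report.packetsReceived;
    }
  });
  return packets;
}
Calling audioPacketsReceived(pc) a couple of seconds apart and comparing the counts tells you whether audio data is actually flowing, regardless of whether it is silent.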
For more on this, check out my blog with working examples in Chrome and Firefox.
A simple hack is:
During the first second, play a 1 Hz tone back to the server.
If the server receives it within the first second, it plays back a 2 Hz tone; if not, it plays back a 1 Hz tone.
If the client does not get the 2 Hz tone back from the server, it restarts the connection.
Please note that you should be muted while doing this.
I stumbled on a weird issue in a WebRTC webapp. Here's the setup:
Client A and client B each send audio via a send-only WebRTC connection to an SFU.
Client C receives the audio streams from clients A and B via two receive-only connections to that same SFU and adds them to two different "audio" elements. The routing between these send and receive connections works properly.
Here's the problem:
On refreshing the page, client C sometimes hears audio from both clients A and B, but most of the time client C only hears audio from either A or B, at random.
It happens in both Firefox and Chrome.
Both connections are receiving data (see the "bitsReceivedPerSecond" graph), but only one connection is outputting audio. Here is an example where C could hear A but not B:
[webrtc-internals graph for the connection client A -> C]
[webrtc-internals graph for the connection client B -> C]
My understanding of these graphs is that the raw WebRTC connections are working fine (data is sent and received), but somehow one of the connections randomly does not output audio.
Does anyone have a clue how this can happen?
Here is the "ontrack" callback that adds the streams to the audio elements. The logs do appear correctly for each connection.
gotRemoteStream(e) {
  Logger.log("Remote Streams: #" + e.streams.length);
  if (this.audioElement.srcObject !== e.streams[0]) {
    Logger.log("Received remote Stream with tracks to audio: " + this.audioElement.id);
    this.audioElement.srcObject = e.streams[0];
  }
}
A single audio element can only play one audio track at a time.
You said there were two incoming audio tracks, so if this.audioElement is the same element, then each call to gotRemoteStream will race setting srcObject, one overwriting the other.
This is most likely why you only hear one or the other.
The simplest solution might be to key off the stream associations sent, since they'll tend to differ:
const elements = {};

gotRemoteStream({streams: [stream]}) {
  if (!elements[stream.id]) {
    // First time we see this stream id: give it whichever audio element is still free.
    elements[stream.id] = this.audioElementA.srcObject ? this.audioElementB
                                                       : this.audioElementA;
  }
  elements[stream.id].srcObject = stream;
}
This should work for two incoming connections. More than that is left as an exercise.
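For completeness, the handler above is assumed to be registered on each receive-only connection in the usual way; pcFromA and pcFromB are hypothetical names for the two RTCPeerConnections:
pcFromA.ontrack = e => this.gotRemoteStream(e);
pcFromB.ontrack = e => this.gotRemoteStream(e);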
Once I have exchanged session descriptions between two peers, how can I allow the user to prevent audio and/or video broadcast? Do I need to exchange the session descriptions again?
"Broadcast" is probably not the correct term since PeerConnections are always unicast peer-to-peer.
To acquire an audio/video stream from the user's devices you call getUserMedia(), and to send it to the other peer you call addStream() on the PeerConnection object.
So, to allow the user not to send the acquired stream, just let her choose whether to call addStream() or not. E.g. show a popup saying "Send audio/video to the other user?". If she chooses "Yes", call addStream() on the PeerConnection object; otherwise just don't call it.
EDIT to answer question in comment:
If you'd like to stop sending audio and/or video, just call removeStream() on the PeerConnection object with the stream to remove as the parameter. Per the API spec, this will trigger a renegotiation.
See http://dev.w3.org/2011/webrtc/editor/webrtc.html#interface-definition for further details.
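As a rough sketch of the flow described above (pc and stream are assumed to already exist, and the confirm() prompt stands in for whatever UI you use); note that addStream()/removeStream() are the legacy API this answer refers to:
// Ask the user before sending anything.
if (confirm("Send audio/video to the other user?")) {
  pc.addStream(stream); // legacy API; modern code adds/removes individual tracks instead
}

// Later, to stop sending, remove the stream; per the spec this triggers renegotiation.
function stopSending() {
  pc.removeStream(stream);
}

// Renegotiation: create and exchange a new offer when needed.
pc.onnegotiationneeded = async () => {
  await pc.setLocalDescription(await pc.createOffer());
  // Send pc.localDescription to the other peer over your signaling channel.
};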
Can you POST data to a URL/webserver after microphone audio capture with getUserMedia()? I have seen examples of how to use GuM, but they always reference "stream" and never do anything with it; I would like to know if I can POST that audio data to a webserver after the recording is stopped.
Have a look at this article: http://www.smartjava.org/content/record-audio-using-webrtc-chrome-and-speech-recognition-websockets
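For what it's worth, here is a minimal sketch of one way to do this with MediaRecorder and fetch(); the /upload endpoint is a placeholder for your own server:
navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.onstop = () => {
    // Bundle the recorded chunks and POST them to the server.
    const blob = new Blob(chunks, { type: recorder.mimeType });
    fetch("/upload", { method: "POST", body: blob }) // placeholder endpoint
      .then(res => console.log("Upload status:", res.status))
      .catch(err => console.error(err));
  };
  recorder.start();
  // Stop after 5 seconds for demo purposes.
  setTimeout(() => recorder.stop(), 5000);
});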