Hey Folks
I am new to WebRTC and have a question about how to start and stop a WebRTC track to save bandwidth.
My backend provides multiple video streams, so my SDP looks like this:
a=ssrc:3671328831 msid:user3339856498#host-602bca webrtctransceiver2
a=ssrc:3671328831 cname:user3339856498#host-602bca
a=mid:video0
a=ssrc:3671267278 msid:user3339856498#host-602bca webrtctransceiver3
a=ssrc:3671267278 cname:user3339856498#host-602bca
a=mid:video1
My goal is to let the user choose which stream to watch. My problem is that tracks added by the RTCPeerConnection already receive data, and even when I set track.enabled = false, data still arrives at full bandwidth.
Is there a way to start and stop the track so that no data is transmitted (RTCRtpReceiver)?
thanks
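For reference, a minimal sketch of the usual approach, assuming Unified Plan and a backend that supports renegotiation. Reception is paused by changing the transceiver's direction, not by toggling track.enabled (which only mutes playback locally while the sender keeps transmitting):

// Hedged sketch: pausing reception takes a direction change plus renegotiation.
function setVideoEnabled(pc, mid, enabled) {
  // Find the transceiver for the SDP m-line ("video0", "video1", ...).
  const transceiver = pc.getTransceivers().find((t) => t.mid === mid);
  if (!transceiver) return;
  // "recvonly" resumes reception; "inactive" asks the remote side to stop sending.
  transceiver.direction = enabled ? "recvonly" : "inactive";
  // Setting direction fires "negotiationneeded": complete a fresh offer/answer with the backend.
}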
Related
I am using WebRTC with Asterisk, and about 5% of the time there is no audio due to a signaling error. The simple fix is: if no audio is coming through, stop the connection and try again. I know this is a band-aid while I fix the real issue, but for now I will band-aid my code.
To get the audio I am doing the following:
var remoteStream = new MediaStream();
peerConnection.getReceivers().forEach((receiver) => {
  remoteStream.addTrack(receiver.track); // collect every incoming track
});
callaudio.srcObject = remoteStream;
callaudio.play();
The problem here is that the remote stream always ends up with a track, even when no audio is coming out of the speakers.
If you inspect chrome://webrtc-internals you can see there is no audio being sent, but there is still a receiver. You can look at the media stream and see there is indeed an audio track. Everything suggests that I should be hearing something, but I hear nothing 5% of the time.
My solution is to get the data from the receiver track and check whether anything is coming across, but I have no idea how to read that data. I have the Web Audio API working, but it only works if some sound is currently being played, and sometimes the person on the other end does not talk for 10 seconds. I need a way to read the raw data and see that something is going across. I just want to know: is there ANY data on a MediaStream!
If you do remoteStream.getAudioTracks() you get back an audio track, because there is one; there is just no audio going across that track.
In the latest API, receiver.track is present before a connection is made, even if it goes unused, so you shouldn't infer anything from its presence.
There are at least 5 ways to check when audio reception has been negotiated:
Retroactively: Check receiver.track.muted. Remote tracks are born muted, and receive an unmute event if/once data arrives:
audioReceiver.track.onunmute = () => console.log("Audio data arriving!");
Proactively: Use pc.ontrack. A track event is fired as a result of negotiation, but only for tracks that will receive data. An event for trackEvent.track.kind == "audio" means there will be audio.
Automatic: Use the remote stream provided in trackEvent.streams[0] instead of your own (assuming the other side added one in addTrack). RTCPeerConnection populates this one based on what's negotiated only (no audio track present unless audio is received).
Unified-plan: Check transceiver.direction: "sendrecv" or "recvonly" means you're receiving something; "sendonly" or "inactive" means you're not.
Beyond negotiation: Use getStats() to inspect .packetsReceived of the "inbound-rtp" stats for .kind == "audio" to see if packets are flowing.
The first four are deterministic checks of what has been negotiated. The fifth is for when everything else checks out but you're still not receiving audio for some reason.
All of these work regardless of whether the audio is silent, as you requested (your question is really about what's been negotiated, not about what's audible).
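A combined sketch of the five checks (pc is your RTCPeerConnection; callaudio is the audio element from the question):

// 1. Retroactively: remote tracks are born muted and unmute once data arrives.
pc.getReceivers().forEach(({ track }) => {
  track.onunmute = () => console.log(track.kind + " data arriving!");
});

// 2 + 3. Proactively/automatically: ontrack fires only for negotiated tracks,
// and streams[0] is the remote stream populated from negotiation alone.
pc.ontrack = ({ track, streams }) => {
  if (track.kind === "audio" && streams[0]) {
    callaudio.srcObject = streams[0];
  }
};

// 4. Unified-plan: inspect the negotiated direction.
const receivingAudio = pc.getTransceivers().some(
  (t) =>
    t.receiver.track.kind === "audio" &&
    ["sendrecv", "recvonly"].includes(t.currentDirection)
);

// 5. Beyond negotiation: check whether RTP packets are actually flowing.
async function audioPacketsFlowing(pc) {
  const stats = await pc.getStats();
  for (const report of stats.values()) {
    if (report.type === "inbound-rtp" && report.kind === "audio") {
      return report.packetsReceived > 0;
    }
  }
  return false;
}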
For more on this, check out my blog with working examples in Chrome and Firefox.
A simple hack:
During the first second, play a 1 Hz tone back to the server.
If the server receives it within that first second, it plays back a 2 Hz tone; if not, it plays back 1 Hz.
If the client does not get the 2 Hz tone back from the server, it restarts the connection.
Please note that you should stay muted while doing this.
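A rough sketch of the client-side injection half of this hack, assuming the Web Audio API and an existing RTCPeerConnection named pc (all names are illustrative):

// Hypothetical sketch: feed a low-frequency probe tone into the outgoing audio sender.
const audioCtx = new AudioContext();
const osc = audioCtx.createOscillator();
osc.frequency.value = 1; // the 1 Hz probe described above
const dest = audioCtx.createMediaStreamDestination();
osc.connect(dest);
osc.start();

// Swap the probe in for the microphone track; swap back once the 2 Hz reply arrives.
const sender = pc.getSenders().find((s) => s.track && s.track.kind === "audio");
sender.replaceTrack(dest.stream.getAudioTracks()[0]);

Detecting the returned tone would take an AnalyserNode on the remote stream; the server-side half depends entirely on your media server.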
I'm programming with PeerJS and, because PeerJS is already very outdated, I'm testing everything on Firefox version 38. I know this is not ideal, but I don't have time for more. So, I'm trying to do the following:
Peer1 transmits audio and video to Peer2.
Peer2 wants to transmit to Peer3 the video it receives from Peer1, but not the audio. Peer2 wants to send its own audio instead.
Basically, Peer3 will receive the video from Peer1 (with Peer2 relaying it) and the audio from Peer2, but it will all arrive together, as in a normal WebRTC call.
I do it like this:
var mixStream = remoteStream;
var audioMixStream = mixStream.getAudioTracks();
mixStream = mixStream.removeStream(audioMixStream);
var mixAudioStream = localStream;
var audioMixAudioStream = mixAudioStream.getAudioTracks();
mixStream.addTrack(audioMixAudioStream);
//Answer the call automatically instead of prompting user.
call.answer(window.audioMixAudioStream);
But I'm getting an error on removeStream. Maybe I will get more errors after that one, but now I'm stuck on this one.
Can someone please tell what I should use instead of removeStream?
P.S.: I already tried removeTrack as well and got an error too.
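For what it's worth, a sketch of what this block likely intends with the track-based API (whether Firefox 38 supports all of it is another matter): getAudioTracks() returns an array, while removeTrack()/addTrack() each take a single MediaStreamTrack, which is probably why both calls error out.

var mixStream = remoteStream;

// Remove Peer1's audio one track at a time.
mixStream.getAudioTracks().forEach(function (track) {
  mixStream.removeTrack(track);
});

// Add Peer2's own audio the same way.
localStream.getAudioTracks().forEach(function (track) {
  mixStream.addTrack(track);
});

call.answer(mixStream);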
I am currently developing a game using NodeJS + SocketIO, but I am having problems with the amount of data being sent. The server currently sends about 600-800 kbps, which is not good at all.
Here are my classes:
Shape
Pentagon
Square
Triangle
Entity
Player
Bullet
Every frame (60 fps), I update each of the classes, and each class has an updatePack that is sent to the client. The updatePack is pretty simple; it only contains the object's id and coords.
At first, I thought every game was like that (silly me). I looked into several simple games like agar.io, slither.io, diep.io, and rainingchain.com and found that they use < 100 kbps, which made me realize that I am sending too much data.
Then I looked into compressing the data being sent, but found out that data is automatically compressed when sent with Socket.io.
Here is how I send my data:
for (var i in list_containing_all_of_my_sockets) {
  var socket = list_containing_all_of_my_sockets[i];
  var data = set_data_function(); // build this socket's update pack
  socket.emit('some message', data);
}
How can I make it send less data? Is there something that I missed?
Opinionated answer, considering how games typically handle server-client traffic. This is not the answer:
Rendering is a presentation concern. Your server, which is the single source of truth about the game state, should only care about changing and advertising the game state. Rendering at 60fps is not a server concern, and therefore the server shouldn't be sending 60 updates per second for all moving objects (you might as well be better off just rendering the whole thing on the server and sending it over as a video stream).
The client should know about the game state and know how to render the changing game state at 60fps. The server should only send either state changes or events that change state to the client. In the latter case the client would know how to apply those events to change the state in tandem with the state known to the server.
For example, the server could send just the updated movement vectors (or even the acting forces) for each object, and the client could calculate each object's coordinates from its currently known movement vector plus the elapsed time.
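A sketch of that idea (the "state" event name and object shape are assumptions): the server emits vectors only when they change, and the client dead-reckons positions every frame.

const objects = new Map(); // id -> { id, x, y, vx, vy }

socket.on("state", (updates) => {
  for (const u of updates) objects.set(u.id, u); // server sent new vectors
});

let last = performance.now();
function render(now) {
  const dt = (now - last) / 1000;
  last = now;
  for (const o of objects.values()) {
    o.x += o.vx * dt; // advance along the known movement vector
    o.y += o.vy * dt;
  }
  // ...draw objects here at 60fps...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);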
Maybe it's better not to send data every frame, but instead only on particular events (e.g. collisions, deaths, spawns).
Whenever a message is sent over a network, it contains not only the actual data you want to send but also a lot of additional data for routing, error prevention, and other overhead.
Since you're sending all your data in individual messages, that additional information is created for every single one of them.
Instead, you should gather all the data you need to send, save it into one object, and send that in a single message.
You could use arrays instead of objects to cut down the size (by omitting the keys). It will be harder to use the data later, but... you've got to make compromises.
Also, by the looks of it, Socket.IO can compress data too, so you can utilize that.
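A server-side sketch of batching plus compression (the "tick" event name, allObjects, and the [id, x, y] pack format are assumptions):

// Enable per-message deflate and send one batched message per tick
// instead of one message per object.
const io = require("socket.io")(server, {
  perMessageDeflate: { threshold: 1024 } // only compress payloads above ~1 KB
});

setInterval(() => {
  const pack = allObjects.map((obj) => [obj.id, obj.x, obj.y]); // arrays, no keys
  io.emit("tick", pack); // a single emit broadcasts to every connected socket
}, 1000 / 60);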
As for updating, I've made a few games using Node.js and Socket.IO. The process is the following:
Player with socket id player1 sends his data (let's say coordinates {x:5, y:5})
Server receives the data and saves it in an object containing all players' data:
{
  "player1": {x: 5, y: 5},
  "player2": ...
  ...
}
Server sends that object to player1.
player1 receives the object and uses the data to visualize the other players.
Basically, a player receives data only after he has sent his own. This way, if his browser hangs, you don't bombard him with data. Otherwise, if you've sent 15 updates while the user's browser hung, he needs more time to process those 15 updates; during that time you send even more updates, so the browser needs even more time to process them. This snowballs into a disaster.
When a player receives data only after sending his own, you ensure that:
The sending player gets the other players' data immediately, meaning that he doesn't wait for the server's 60 times-per-second update.
If the player's browser crashes, he no longer sends data and therefore no longer receives it since he can't visualize it anyway.
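A sketch of that flow with Socket.IO (event names are assumptions):

// Server: reply with the full player table only when a player reports in.
const players = {};
io.on("connection", (socket) => {
  socket.on("update", (coords) => {
    players[socket.id] = coords;     // e.g. { x: 5, y: 5 }
    socket.emit("players", players); // reply only to the sender
  });
  socket.on("disconnect", () => delete players[socket.id]);
});

// Client: render on reply, then report again; a hung browser simply stops the loop.
socket.emit("update", { x: 5, y: 5 });
socket.on("players", (all) => {
  draw(all);                         // draw() is a placeholder
  socket.emit("update", myCoords()); // myCoords() is a placeholder
});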
Once I have exchanged session descriptions between two peers, how can I allow the user to prevent audio and/or video broadcast? Do I need to exchange the session descriptions again?
"Broadcast" is probably not the correct term since PeerConnections are always unicast peer-to-peer.
To acquire an audio/video stream from the user's devices you call getUserMedia() and to send these to the other peer you call addStream() on the PeerConnection object.
So to allow the user to not send the acquired stream just let her choose whether to call addStream() or not. E.g. show a popup saying "Send Audio/Video to the other user?". If she chooses "Yes" call addStream() on the PeerConnection object, otherwise just don't call it.
EDIT to answer question in comment:
If you'd like to stop sending audio and/or video, just call removeStream() on the PeerConnection object with the stream to remove as the parameter. Per the API spec, this will trigger renegotiation.
See http://dev.w3.org/2011/webrtc/editor/webrtc.html#interface-definition for further details.
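A sketch using the legacy stream-based API this answer describes (modern code would use addTrack()/removeTrack() instead):

// Legacy sketch: only send media if the user agrees.
navigator.getUserMedia({ audio: true, video: true }, function (stream) {
  if (confirm("Send Audio/Video to the other user?")) {
    pc.addStream(stream); // triggers negotiation
  }
  // Later, to stop sending and trigger renegotiation:
  // pc.removeStream(stream);
}, function (err) {
  console.error(err);
});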
Can you POST data to a URL/webserver after capturing microphone audio with getUserMedia()? I have seen examples of how to use GuM, but they always reference "stream" and never do anything with it; I would like to know if I can POST that audio data to a webserver after the recording is stopped.
Have a look at this article: http://www.smartjava.org/content/record-audio-using-webrtc-chrome-and-speech-recognition-websockets
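For reference, a minimal sketch of the modern approach with MediaRecorder (the /upload endpoint is hypothetical):

// Capture microphone audio, record it, and POST the blob once recording stops.
async function recordAndUpload() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(stream);
  const chunks = [];

  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = async () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    await fetch("/upload", { method: "POST", body: blob }); // hypothetical endpoint
  };

  recorder.start();
  return recorder; // call recorder.stop() when the user finishes
}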