getUserMedia() audio - POST to webserver? - javascript

Can you POST data to a URL/webserver after capturing microphone audio with getUserMedia()? I have seen examples of how to use getUserMedia, but they always reference "stream" and never do anything with it; I would like to know if I can POST that audio data to a webserver after the recording is stopped.

Have a look at this article: http://www.smartjava.org/content/record-audio-using-webrtc-chrome-and-speech-recognition-websockets
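For reference, a minimal sketch of one way to do this today with MediaRecorder and fetch (both postdate the linked article); the /upload endpoint is a hypothetical placeholder:

// Capture mic audio, record it, and POST the result when stopped.
// "/upload" is a hypothetical endpoint; adjust to your server.
navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    fetch("/upload", { method: "POST", body: blob });
  };
  recorder.start();
  // Call recorder.stop() when the user ends the recording;
  // onstop then fires and the POST goes out.
});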

Related

Webrtc handling multiple streams, reduce bandwidth

Hey folks,
I am new to WebRTC and have a question about how to start and stop a WebRTC track to save bandwidth.
My backend provides multiple video streams, so my SDP looks like this:
a=ssrc:3671328831 msid:user3339856498#host-602bca webrtctransceiver2
a=ssrc:3671328831 cname:user3339856498#host-602bca
a=mid:video0
a=ssrc:3671267278 msid:user3339856498#host-602bca webrtctransceiver3
a=ssrc:3671267278 cname:user3339856498#host-602bca
a=mid:video1
My goal is to let the user choose which stream to watch.
My problem is that the tracks added by the RTCPeerConnection already send data,
and even when I set track.enabled = false, data is still sent at full bandwidth.
Is there a way to start and stop the track so that no data is transmitted (RTCRtpReceiver)?
Thanks
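There is no accepted answer here, but one approach worth sketching (an assumption on my part, not a confirmed fix): with unified-plan, setting the receiving transceiver's direction to "inactive" and renegotiating stops the RTP flow entirely, which track.enabled = false does not.

// Hedged sketch: stop reception on one transceiver via renegotiation.
// "video1" matches the a=mid line above; sendToBackend() is a
// hypothetical signaling helper you would have to provide.
async function pauseStream(pc, mid) {
  const transceiver = pc.getTransceivers().find((t) => t.mid === mid);
  transceiver.direction = "inactive"; // no data flows after renegotiation
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToBackend(pc.localDescription);
  // To resume, set direction back to "recvonly" and renegotiate again.
}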

About the WebRTC offer and answer problem

I have 2 devices named A and B.
I found that if the caller (i.e. device A) does not have a webcam, the answer generated by the callee (i.e. device B) does not contain any video information, even though the callee has a webcam.
As a result, device A cannot show the remote video stream.
However, when I swap the roles of the two devices, the answer generated by the callee (i.e. device A) does contain video information even though device A does not have a webcam. Why?
Is there any philosophy behind this?
Is there any other way to ensure the callee side can show the remote video stream?
Pass options to the createOffer() method:
pc.createOffer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true
});
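Note that offerToReceiveAudio/offerToReceiveVideo are legacy options. A sketch of the spec-compliant equivalent (my addition, not part of the answer above) is to add recvonly transceivers before creating the offer:

// Spec equivalent of the legacy offerToReceive* options:
pc.addTransceiver("audio", { direction: "recvonly" });
pc.addTransceiver("video", { direction: "recvonly" });
pc.createOffer().then((offer) => pc.setLocalDescription(offer));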

webRTC how to tell if there is audio

I am using WebRTC with Asterisk, and about 5% of the time I get an error where there is no audio due to a signaling problem. The simple fix is: if no audio comes through, stop the connection and try again. I know this is a band-aid while I fix the real issue, but for now I will band-aid my code.
To get the audio I am doing the following:
// Collect every receiver's track into one stream and play it.
const remoteStream = new MediaStream();
peerConnection.getReceivers().forEach((receiver) => {
  remoteStream.addTrack(receiver.track);
});
callaudio.srcObject = remoteStream;
callaudio.play();
The problem here is that the remote stream always adds a track, even when no audio is coming out of the speakers.
If you inspect chrome://webrtc-internals you can see that no audio is being sent, but there is still a receiver. You can look at the media stream and see that there is indeed an audio track. Everything suggests that I should be hearing something, yet I hear nothing 5% of the time.
My solution is to get the data from the receiver track and check whether anything is coming across, but I have no idea how to read that data. I have the Web Audio API working, but it only works if some sound is currently being played, and sometimes the person on the other end does not talk for 10 seconds. I need a way to read the raw data and see that something is going across. I just want to know: is there ANY data on a MediaStream?
If you do remoteStream.getAudioTracks() you get back an audio track, because there is one; there is just no audio going across that track.
In the latest API, receiver.track is present before a connection is made, even if it goes unused, so you shouldn't infer anything from its presence.
There are at least 5 ways to check when audio reception has been negotiated:
Retroactively: Check receiver.track.muted. Remote tracks are born muted, and receive an unmute event if/once data arrives:
audioReceiver.track.onunmute = () => console.log("Audio data arriving!");
Proactively: Use pc.ontrack. A track event is fired as a result of negotiation, but only for tracks that will receive data. An event for trackEvent.track.kind == "audio" means there will be audio.
Automatic: Use the remote stream provided in trackEvent.streams[0] instead of your own (assuming the other side added one in addTrack). RTCPeerConnection populates this one based on what's negotiated only (no audio track present unless audio is received).
Unified-plan: Check transceiver.currentDirection: "sendrecv" or "recvonly" means you're receiving something; "sendonly" or "inactive" means you're not.
Beyond negotiation: Use getStats() to inspect .packetsReceived of the "inbound-rtp" stats for .kind == "audio" to see if packets are flowing.
The first four are deterministic checks of what's been negotiated. The fifth is for when everything else checks out but you're still not receiving audio for some reason.
All of these work regardless of whether the audio is silent, as you requested (your question is really about what's been negotiated, not about what's audible).
For more on this, check out my blog with working examples in Chrome and Firefox.
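For illustration, here is a sketch of the fifth check (my own example, not taken from the blog), polling getStats() for inbound audio packets:

// Returns true once inbound audio RTP packets have been received.
async function isReceivingAudio(pc) {
  const stats = await pc.getStats();
  for (const report of stats.values()) {
    if (report.type === "inbound-rtp" && report.kind === "audio") {
      return report.packetsReceived > 0;
    }
  }
  return false;
}
// Usage: poll a few times after connecting; if it stays false,
// tear down and retry, as the band-aid above describes.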
A simple hack:
During the first second, the client plays a 1 Hz tone back to the server.
If the server receives it within that first second, it plays a 2 Hz tone back; if not, it plays the 1 Hz tone back again.
If the client does not get the 2 Hz tone back from the server, it restarts.
Please note, you should be muted while doing this.

How do I access all videos in a particular playlist with the Youtube API?

I'm working on a React site for a musician, and I want to make a page that displays the YouTube videos of his music. He has created a playlist of videos he wants included. I can access the videos individually, but I want to set it up so that whatever videos he adds to that playlist will be fetched by the API and displayed on his site.
For individual videos I'm using Axios to get the videos from this url:
https://www.googleapis.com/youtube/v3/videos?part=snippet&id=<My ID>&key=<My Key>
Then in the render method I destructure videos off of state and map over videos, returning each in an iframe element like so:
<iframe src={`https://www.youtube.com/embed/${videos.id}`}></iframe>
I've been playing with the syntax of my GET URL to try to get the playlist; currently I'm getting a 400 status code on my GET, so I'll need to start there. This is what I have right now for the playlist URL:
axios.get('https://www.googleapis.com/youtube/v3/search?key=<My Key>&list=<Playlist ID>&part=snippet,id&order=date&maxResults=20')
Can someone give me an idea of where I'm going wrong?
Rather than playing with URL syntax, you can refer to the YouTube Data API docs to find different kinds of exposed data.
In your case, you're probably looking for managing PlaylistItems. More specifically, the list method:
Returns a collection of playlist items that match the API request parameters.
Here's an example that should get you the id and snippet information for each video in the playlist with ID REPLACEWITHYOURID (note that a key parameter is still required, as in your original call):
axios.get('https://www.googleapis.com/youtube/v3/playlistItems?playlistId=REPLACEWITHYOURID&part=snippet,id&maxResults=20&key=REPLACEWITHYOURKEY')
Try using the below API endpoint instead. This will return data about a playlist instead of a specific video.
https://www.googleapis.com/youtube/v3/playlistItems?part=snippet&maxResults=25&playlistId=[playlist-id]&key=[your-key]
Use the above URL in your axios.get('') call, replacing [playlist-id] with your client's playlist ID and [your-key] with your API key.
NOTE:
Also, make sure the YouTube Data API is enabled for the key that you use. You can check your settings here. If it's not enabled, you'll get a 400 error message.
When you look at the returned data, look for {resourceId: {videoId: 34rewt6}}. Take that videoId and append it to https://www.youtube.com/watch?v= to get the direct link to the video.
To test the calls you make to APIs, have a look at Postman. It's a really neat tool to test and look at the data that's returned.
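Putting it together, a sketch of the axios call and how to pull the video IDs out of the response (PLAYLIST_ID and API_KEY are placeholders you would supply):

// Fetch the playlist's items and build embed URLs from each videoId.
const url =
  "https://www.googleapis.com/youtube/v3/playlistItems" +
  "?part=snippet&maxResults=25" +
  `&playlistId=${PLAYLIST_ID}&key=${API_KEY}`;

axios.get(url).then((res) => {
  const embedUrls = res.data.items.map(
    (item) => `https://www.youtube.com/embed/${item.snippet.resourceId.videoId}`
  );
  // e.g. put embedUrls on state and render an <iframe src={...}>
  // for each entry, as in the question.
});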

RTCSessionDescription and video/audio broadcast

Once I have exchanged session descriptions between two peers, how can I allow the user to prevent audio and/or video broadcast? Do I need to exchange the session descriptions again?
"Broadcast" is probably not the correct term since PeerConnections are always unicast peer-to-peer.
To acquire an audio/video stream from the user's devices you call getUserMedia(), and to send it to the other peer you call addStream() on the PeerConnection object.
So to allow the user not to send the acquired stream, just let her choose whether to call addStream() or not. E.g. show a popup saying "Send Audio/Video to the other user?". If she chooses "Yes", call addStream() on the PeerConnection object; otherwise just don't call it.
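As a sketch of that flow (using the same legacy addStream() API the answer describes; confirm() stands in for whatever UI you use):

// Ask the user before attaching the local stream to the connection.
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then((stream) => {
    if (confirm("Send Audio/Video to the other user?")) {
      pc.addStream(stream); // legacy API; modern code uses pc.addTrack()
    }
    // If the user declines, simply never call addStream().
  });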
EDIT to answer question in comment:
If you'd like to stop sending audio and/or video, just call removeStream() on the PeerConnection object with the stream to remove as a parameter. Per the API spec, this will trigger a renegotiation.
See http://dev.w3.org/2011/webrtc/editor/webrtc.html#interface-definition for further details.
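A sketch of that renegotiation flow (removeStream() is the legacy API the answer refers to; modern code would call pc.removeTrack() on the corresponding RTCRtpSender):

// Set the handler first, then remove the stream; the removal
// triggers negotiationneeded per the spec.
pc.onnegotiationneeded = async () => {
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  // send pc.localDescription to the remote peer via your signaling channel
};
pc.removeStream(localStream); // localStream: the stream passed to addStream()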
