Since RTCPeerConnection.addStream() is deprecated, how does one now add a media stream such as video to a peerConnection?
Here is the video stream capture function that I am currently using. I wanted to see if I am getting a media stream back from my camera and indeed I am. But then how do I add the stream to a peer connection?
var constraints = {
audio: false,
video: true
};
function successCallback(stream) {
window.stream = stream; // stream available to console
if (window.URL) {
document.getElementById('localVid').src = window.URL.createObjectURL(stream);
} else {
document.getElementById('localVid').src = stream;
}
peerConn.addStream(stream); // deprecated, so onaddstream is not fired
}
function errorCallback(error) {
console.log('navigator.getUserMedia error: ', error);
}
I have successfully set up a signaling system for data channels and now I am moving on to streaming, however the limited resources are frustrating and I need some help.
Mozilla decided to deprecate addStream before Chrome had implemented the new alternative, addTrack. addStream is still supported, and I doubt it is going to be removed while it remains the one and only method available in Chrome.
Note that onaddstream fires when a remote stream is added to a peer connection, not when a local stream is added.
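Where addTrack is available, the modern pattern is to add each track of the local stream individually and listen for track events on the remote side. A minimal sketch, using peerConn and stream from the question's snippet (the 'remoteVid' element id is a hypothetical placeholder):

```javascript
// Add every track of a local stream to a peer connection.
// With the track-based API, the remote peer fires 'track' events
// (peerConn.ontrack) instead of the deprecated 'addstream'.
function addLocalTracks(peerConn, stream) {
  stream.getTracks().forEach(track => peerConn.addTrack(track, stream));
}

// Remote side (browser only; 'remoteVid' is a placeholder id):
// peerConn.ontrack = (event) => {
//   document.getElementById('remoteVid').srcObject = event.streams[0];
// };
```

Passing the stream as the second argument to addTrack is what lets the remote side group the tracks back into event.streams[0].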
Related
I have been working on streaming live video over WebRTC (RTCPeerConnection) with a library called simple-peer, but I am facing some lag between the live video (captured with MediaRecorder) and what is played back using MediaSource.
Here is recorder:
var mediaRecorder = new MediaRecorder(stream, options);
mediaRecorder.ondataavailable = handleDataAvailable;
function handleDataAvailable(event) {
if (connected && event.data.size > 0) {
peer.send(event.data);
}
}
...
peer.on('connect', () => {
// wait for 'connect' event before using the data channel
mediaRecorder.start(1);
});
Here is source that is played:
var mediaSource = new MediaSource();
var sourceBuffer;
mediaSource.addEventListener('sourceopen', args => {
sourceBuffer = mediaSource.addSourceBuffer(mimeCodec);
});
...
peer.on('data', data => {
// got a data channel message
sourceBuffer.appendBuffer(data);
});
When I open two tabs and connect to myself, I see a delay in the played video...
It seems like I configured MediaRecorder or MediaSource badly.
Any help will be appreciated ;)
You've combined two completely unrelated techniques for streaming the video, and are getting the worst tradeoffs of both. :-)
WebRTC has media stream handling built into it. If you expect realtime video, the WebRTC stack is what you want to use. It handles codec negotiation, auto-scales bandwidth, frame size, frame rate, and encoding parameters to match network conditions, and will outright drop chunks of time to keep playback as realtime as possible.
On the other hand, if retaining quality is more desirable than being realtime, MediaRecorder is what you would use. It makes no adjustments based on network conditions because it is unaware of those conditions. MediaRecorder doesn't know or care where you put the data after it gives you the buffers.
If you try to play back video as it's being recorded, it will inevitably lag further and further behind, because there is no built-in catch-up method. The only thing that can happen is a buffer underrun, where the playback side waits until there is enough data to begin playback again. Even if it falls minutes behind, it isn't going to automatically skip ahead.
The solution is to use the right tool. It sounds like from your question that you want realtime video. Therefore, you need to use WebRTC. Fortunately simple-peer makes this... simple.
On the recording side:
const peer = new Peer({
initiator: true,
stream
});
Then on the playback side:
peer.on('stream', (stream) => {
videoEl.srcObject = stream;
});
Much simpler. The WebRTC stack handles everything for you.
So I am having a slight issue with WebRTC and video streaming. I am setting the stream to a video object like so:
let video = document.createElement('video');
video.muted = true;
video.srcObject = event.streams[0];
video.onloadedmetadata = function (e) {
//Is Never Executed
var playPromise = video.play();
}
The event.streams is from a remote client. It works very well when the two clients are on the same network, i.e. on a router in a home, or when they are in the same town on different networks. But it fails when trying to connect to a client that is across the country, and onloadedmetadata is never called.
What could cause it to work when connecting two clients that are near each other, but fail with distance?
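A common cause for exactly this symptom (nearby peers connect, distant peers or stricter NATs fail) is that ICE never finds a working candidate pair because no TURN relay is configured. A hedged sketch of a configuration with a relay follows; the TURN URL and credentials are placeholders, not a real service:

```javascript
// ICE configuration with both STUN and TURN; without a TURN relay,
// peers behind symmetric NATs often cannot connect at all, so no
// media arrives and onloadedmetadata never fires.
const rtcConfig = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      urls: 'turn:turn.example.com:3478', // placeholder relay
      username: 'user',                   // placeholder credential
      credential: 'pass'
    }
  ]
};
// In a browser: const pc = new RTCPeerConnection(rtcConfig);
```

Logging pc.iceConnectionState in an oniceconnectionstatechange handler will show whether the connection fails during ICE, which distinguishes a NAT-traversal problem from a media problem.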
I'm developing a website where the user can send audio commands which are captured with getUserMedia (only audio) and interpreted in the backend with a Speech-to-Text service. In order to keep the latency as low as possible, I'm sending small audio chunks to my server. This is working just fine on Chrome/Firefox and even Edge. However, I'm struggling with iOS Safari. I know that Safari is my only choice on Apple devices because of the missing WebRTC support on iOS Chrome/Firefox.
The problem is that I normally get the user's voice a couple of times (for some commands), but then, without any pattern, the stream suddenly contains only empty bytes. I tried a lot of different strategies, but in general I stuck to the following plan:
After user clicks a button, call getUserMedia (with audio constraint) and save stream to a variable
Create AudioContext (incl. Gain, MediaStreamSource, ScriptProcess) and connect the audio stream to the MediaStreamSource
Register an event listener to the ScriptProcessor and send audio chunks in callback to the server
When a result is returned from the server close AudioContext and audio's MediaStream
The interesting part is what happens after a subsequent user command. I tried various things: calling getUserMedia again for each command and closing the MediaStream track each time, reusing the initially created MediaStream and reconnecting the event handler every time, closing the AudioContext after every call, using only one initially created AudioContext... All my attempts failed so far, because I either got empty bytes from the stream or the AudioContext was created in a "suspended" state. Only closing the MediaStream/AudioContext and recreating both every time seems to be more stable, but fetching the MediaStream with getUserMedia takes quite a while on iOS (~1.5-2 s), which gives a bad user experience.
I'll show you my latest attempt where I tried to mute/disable the stream in between user commands and keep the AudioContext open:
var audioStream: MediaStream;
var audioContext: AudioContext;
var scriptProcessor: ScriptProcessorNode;
var startButton = document.getElementById("startButton");
startButton.onclick = () => {
if (!audioStream) {
getUserAudioStream();
} else {
// unmute/enable stream
audioStream.getAudioTracks()[0].enabled = true;
}
}
var stopButton = document.getElementById("stopButton");
stopButton.onclick = () => {
// mute/disable stream
audioStream.getAudioTracks()[0].enabled = false;
}
function getUserAudioStream(): Promise<any> {
return navigator.mediaDevices.getUserMedia({
audio: true
} as MediaStreamConstraints).then((stream: MediaStream) => {
audioStream = stream;
startRecording();
}).catch((e) => { ... });
}
const startRecording = () => {
const AudioCtx = (window as any).AudioContext || (window as any).webkitAudioContext;
if (!AudioCtx) {
console.error("No Audio Context available in browser.");
return;
} else {
audioContext = new AudioCtx();
}
const inputPoint = audioContext.createGain();
const microphone = audioContext.createMediaStreamSource(audioStream);
scriptProcessor = inputPoint.context.createScriptProcessor(4096, 1, 1);
microphone.connect(inputPoint);
inputPoint.connect(scriptProcessor);
scriptProcessor.connect(inputPoint.context.destination);
scriptProcessor.addEventListener("audioprocess", streamCallback);
};
const streamCallback = (e) => {
const samples = e.inputBuffer.getChannelData(0);
// Here I stream audio chunks to the server and
// observe that buffer sometimes only contains empty bytes...
}
I hope the snippet makes sense to you; I left some stuff out to keep it readable. I think I made clear that this is only one of many attempts, and my actual question is: is there some special characteristic of WebRTC/getUserMedia on iOS that I have missed so far? Why does iOS treat MediaStream differently than Chrome/Firefox on Windows? As a last comment: I know that ScriptProcessorNode is no longer recommended. I'd actually like to use MediaRecorder for this, but it is not yet supported on iOS either. Also, the polyfill I know of is not really suitable, because it only supports Ogg for streaming audio, which also leads to problems because I would need to fix the sample rate to a set value.
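One thing worth ruling out, given the "suspended" states mentioned above: Safari only lets an AudioContext run after a user gesture, so resuming it inside the click handler before wiring up the nodes may help. A small sketch (ensureRunning is a hypothetical helper, not part of the snippet above):

```javascript
// Resume an AudioContext-like object if it is suspended; returns a
// promise so callers can wait for the 'running' state before use.
function ensureRunning(ctx) {
  if (ctx.state === 'suspended' && typeof ctx.resume === 'function') {
    return ctx.resume();
  }
  return Promise.resolve();
}

// In the click handler (browser only), something like:
// startButton.onclick = () => ensureRunning(audioContext).then(startRecording);
```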
I see a lot of questions for how to record audio then stop recording, then play audio or save it to a file, but none of this is what I want.
tl;dr Here's my question in a nutshell: "How can I immediately play audio recorded from the user's microphone?" That is, I don't want to save a recording and play it when the user hits a "Play" button, I don't want to save a recording to a file on the user's computer and I don't want to use WebRTC to stream audio anywhere. I just want to talk into my microphone and hear my voice come out the speakers.
All I'm trying to do is make a very simple "echo" page that immediately plays back audio recorded from the mic. I started with a MediaRecorder object, but that wasn't working, and from what I can tell it's meant for recording full audio files, so I switched to an AudioContext-based approach.
A very simple page would just look like this:
<!DOCTYPE html>
<head>
<script type="text/javascript" src="mcve.js"></script>
</head>
<body>
<audio id="speaker" volume="1.0"></audio>
</body>
and the script looks like this:
if (navigator.mediaDevices) {
var constraints = {audio: true};
navigator.mediaDevices.getUserMedia(constraints).then(
function (stream) {
var context = new AudioContext();
var source = context.createMediaStreamSource(stream);
var proc = context.createScriptProcessor(2048, 2, 2);
source.connect(proc);
proc.onaudioprocess = function(e) {
console.log("audio data collected");
let audioData = new Blob(e.inputBuffer.getChannelData(0), {type: 'audio/ogg' } )
|| new Blob(new Float32Array(2048), {type: 'audio/ogg'});
var speaker = document.getElementById('speaker');
let url = URL.createObjectURL(audioData);
speaker.src = url;
speaker.load();
speaker.play().then(
() => { console.log("Playback success!"); },
(error) => { console.log("Playback failure... ", error); }
);
};
}
).catch( (error) => {
console.error("couldn't get user media.");
});
}
It can record non-trivial audio data (i.e. not every collection winds up as a Blob made from the new Float32Array(2048) call), but it can't play it back. It never hits the "could not get user media" catch, but it always hits the "Playback Failure..." catch. The error prints like this:
DOMException [NotSupportedError: "The media resource indicated by the src attribute or assigned media provider object was not suitable."
code: 9
nsresult: 0x806e0003]
Additionally, the message Media resource blob:null/<long uuid> could not be decoded. is printed to the console repeatedly.
There are two things that could be going on here, near as I can tell (maybe both):
I'm not encoding the audio. I'm not sure if this is a problem, since I thought that data collected from the mic came with 'ogg' encoding automagically, and I've tried leaving the type property of my Blobs blank to no avail. If this is what's wrong, I don't know how to encode a chunk of audio given to me by the audioprocess event, and that's what I need to know.
An <audio> element is fundamentally incapable of playing audio fragments, even if properly encoded. Maybe by not having a full file, there's some missing or extraneous metadata that violates encoding standards and is preventing the browser from understanding me. If this is the case, maybe I need a different element, or even an entirely scripted solution. Or perhaps I'm supposed to construct a file-like object in-place for each chunk of audio data?
I've built this code on examples from MDN and SO answers, and I should mention I've tested my mic at this example demo and it appears to work perfectly.
The ultimate goal here is to stream this audio through a websocket to a server and relay it to other users. I DON'T want to use WebRTC if at all possible, because I don't want to limit myself to only web clients - once it's working okay, I'll make a desktop client as well.
Check the example https://jsfiddle.net/greggman/g88v7p8c/ from https://stackoverflow.com/a/38280110/351900
It needs to be run from HTTPS.
navigator.getUserMedia = navigator.getUserMedia ||navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
var aCtx;
var analyser;
var microphone;
if (navigator.getUserMedia) {
navigator.getUserMedia(
{audio: true},
function(stream) {
aCtx = new AudioContext();
microphone = aCtx.createMediaStreamSource(stream);
var destination=aCtx.destination;
microphone.connect(destination);
},
function(){ console.log("Error 003.")}
);
}
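The snippet above uses the legacy callback-based navigator.getUserMedia. A sketch of the same mic-to-speakers echo with the promise-based navigator.mediaDevices API; the dependencies are passed in as parameters here so the wiring is explicit:

```javascript
// Route the microphone straight to the speakers using the promise-based
// API. mediaDevices and AudioContextCtor are injected so the same wiring
// works with navigator.mediaDevices and window.AudioContext in a browser.
function startEcho(mediaDevices, AudioContextCtor) {
  return mediaDevices.getUserMedia({ audio: true }).then(stream => {
    const ctx = new AudioContextCtor();
    const mic = ctx.createMediaStreamSource(stream);
    mic.connect(ctx.destination); // mic -> speakers, no recording step
    return ctx;
  });
}

// In a browser:
// startEcho(navigator.mediaDevices, window.AudioContext);
```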
I'm looking to get the microphone activity level of a WebRTC MediaStream. However, I need to get this information without playing back the microphone to the user (otherwise there will be the loopback effect).
The answer in Microphone activity level of WebRTC MediaStream relies on the audio being played back to the user. How can I do this, without playing back the microphone?
Take a look at the createGain method. It allows you to set a stream's volume.
Here is my (simplified) example that I use in my project:
navigator.getUserMedia({audio: true, video: true}, function(stream) {
var audioContext = new AudioContext; //or webkitAudioContext
var source = audioContext.createMediaStreamSource(stream);
var volume = audioContext.createGain();
source.connect(volume);
volume.connect(audioContext.destination);
volume.gain.value = 0; //turn off the speakers
//further manipulations with source
}, function(err) {
console.log('error', err);
});
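An alternative that avoids routing audio to the speakers entirely is an AnalyserNode: connect the source to the analyser and read levels from it; nothing needs to reach audioContext.destination at all. The averaging helper below is a hypothetical sketch, not a standard API:

```javascript
// Compute a rough 0-255 activity level from an AnalyserNode-like object.
// No connection to the speakers is required for the analyser to see data.
function activityLevel(analyser) {
  const data = new Uint8Array(analyser.frequencyBinCount);
  analyser.getByteFrequencyData(data);
  let sum = 0;
  for (const v of data) sum += v;
  return data.length ? sum / data.length : 0;
}

// In a browser:
// const source = audioContext.createMediaStreamSource(stream);
// const analyser = audioContext.createAnalyser();
// source.connect(analyser); // note: no connect to audioContext.destination
// setInterval(() => console.log(activityLevel(analyser)), 100);
```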