I want to make a simple audio-only stream over WebRTC, using Peer.js. I'm running the simple PeerServer locally.
The following works perfectly fine in Firefox 30, but I can't get it to work in Chrome 35. I would have suspected something was wrong with the PeerJS setup, but Chrome -> Firefox works perfectly fine, while Chrome -> Chrome seems to send the stream but won't play it over the speakers.
Setting up getUserMedia
Note: uncommenting the commented lines below lets me hear the loopback in Chrome and Firefox.
navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);
window.AudioContext = window.AudioContext || window.webkitAudioContext;

if (navigator.getUserMedia) {
    navigator.getUserMedia({video: false, audio: true}, getMediaSuccess, getMediaError);
} else {
    alert('getUserMedia not supported.');
}

var localMediaStream;
//var audioContext = new AudioContext();

function getMediaSuccess(mediaStream) {
    //var microphone = audioContext.createMediaStreamSource(mediaStream);
    //microphone.connect(audioContext.destination);
    localMediaStream = mediaStream;
}

function getMediaError(err) {
    alert('getUserMedia error. See console.');
    console.error(err);
}
Making the connection
var peer = new Peer({host: '192.168.1.129', port: 9000});

peer.on('open', function(id) {
    console.log('My ID:', id);
});

peer.on('call', function(call) {
    console.log('answering call with', localMediaStream);
    call.answer(localMediaStream);
    //THIS WORKS IN CHROME, localMediaStream exists

    call.on('stream', function(stream) {
        console.log('streamReceived', stream);
        //THIS WORKS IN CHROME, the stream has come through

        var audioContext = new AudioContext();
        var audioStream = audioContext.createMediaStreamSource(stream);
        audioStream.connect(audioContext.destination);
        //I HEAR AUDIO IN FIREFOX, BUT NOT CHROME
    });

    call.on('error', function(err) {
        console.log(err);
        //LOGS NO ERRORS
    });
});

function connect(id) {
    var voiceStream = peer.call(id, localMediaStream);
}
This still appears to be an issue even in Chrome 73.
The solution that saved me for now is to also connect the media stream to a muted HTML audio element. This seems to make the stream work, and audio starts flowing into the Web Audio nodes.
This would look something like:
let audioContext = new AudioContext(); // an AudioContext is needed for the nodes below

let a = new Audio();
a.muted = true;
a.srcObject = stream;
a.addEventListener('canplaythrough', () => {
    a = null;
});

let audioStream = audioContext.createMediaStreamSource(stream);
audioStream.connect(audioContext.destination);
JSFiddle: https://jsfiddle.net/jmcker/4naq5ozc/
Original Chromium issue and workaround:
https://bugs.chromium.org/p/chromium/issues/detail?id=121673#c121
New Chromium issues: https://bugs.chromium.org/p/chromium/issues/detail?id=687574 and https://bugs.chromium.org/p/chromium/issues/detail?id=933677
In Chrome, there is currently a known bug where remote audio streams gathered from a peer connection are not accessible through the Web Audio API.
Latest comment on the bug:
We are working really hard towards the feature. The reason why this
takes long time is that we need to move the APM to chrome first,
implement a render mixer to get the unmixed data from WebRtc, then we
can hook up the remote audio stream to webaudio.
It was recently patched in Firefox, as I remember this being an issue there as well in the past.
I was unable to play the stream using Web Audio, but I did manage to play it using a basic audio element:
var audio = new Audio();
audio.src = (URL || webkitURL || mozURL).createObjectURL(remoteStream);
audio.play();
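Note that createObjectURL(MediaStream) has since been removed from browsers; a minimal modern equivalent of the same trick (assuming the same remoteStream variable) assigns the stream directly to srcObject:

var audio = new Audio();
audio.srcObject = remoteStream; // replaces createObjectURL(stream)
audio.play();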
Related
So I am having a slight issue with WebRTC and video streaming. I am setting the stream to a video object like so:
let video = document.createElement('video');
video.muted = true;
video.srcObject = event.streams[0];
video.onloadedmetadata = function (e) {
    //Is Never Executed
    var playPromise = video.play();
};
The event.streams comes from a remote client. It works very well when the two clients are on the same network (i.e., behind the same home router), or when they are in the same town on different networks. But it fails when trying to connect to a client across the country, and onloadedmetadata is never called.
What could cause it to work when connecting two clients that are near each other but fail with distance?
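A hedged diagnostic sketch (the pc variable is an assumption standing in for the RTCPeerConnection that fired the track event): watching the ICE connection state usually shows whether the peers ever actually connect, and failures that only appear across distant networks typically point at NAT traversal and a missing TURN relay rather than at the video element itself.

// Hypothetical diagnostics on the existing peer connection object.
pc.addEventListener('iceconnectionstatechange', () => {
    // 'failed', or a hang in 'checking', suggests the candidates never pair up.
    console.log('ICE connection state:', pc.iceConnectionState);
});

// Example RTCPeerConnection configuration with a TURN relay (placeholder URL and credentials).
const config = {
    iceServers: [
        { urls: 'stun:stun.l.google.com:19302' },
        { urls: 'turn:turn.example.com:3478', username: 'user', credential: 'pass' }
    ]
};
const pcWithTurn = new RTCPeerConnection(config);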
I'm developing a website where the user can send audio commands which are captured with getUserMedia (only audio) and interpreted in the backend with a Speech-to-Text service. In order to keep the latency as low as possible, I'm sending small audio chunks to my server. This is working just fine on Chrome/Firefox and even Edge. However, I'm struggling with iOS Safari. I know that Safari is my only choice on Apple devices because of the missing WebRTC support on iOS Chrome/Firefox.
The problem is that I normally get the user's voice a couple of times (for some commands). But then, without any pattern, the stream suddenly contains only empty bytes. I tried a lot of different strategies, but in general I stuck to the following plan:
1. After the user clicks a button, call getUserMedia (with the audio constraint) and save the stream to a variable.
2. Create an AudioContext (incl. Gain, MediaStreamSource, ScriptProcessor) and connect the audio stream to the MediaStreamSource.
3. Register an event listener on the ScriptProcessor and send audio chunks to the server in its callback.
4. When a result is returned from the server, close the AudioContext and the audio MediaStream.
The interesting part is what happens after a subsequent user command. I tried various things: calling getUserMedia again for each command and closing the MediaStream track each time; reusing the initially created MediaStream and reconnecting the event handler every time; closing the AudioContext after every call; using only one initially created AudioContext... All my attempts have failed so far, because I either got empty bytes from the stream or the AudioContext was created in a "suspended" state. Only closing the MediaStream/AudioContext and creating them again every time seems to be more stable, but fetching the MediaStream with getUserMedia takes quite a while on iOS (~1.5-2 s), which makes for a bad user experience.
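For what it's worth, one detail about the "suspended" case: on iOS the context can usually only be resumed from inside a user gesture, so a resume() call in the button's click handler is worth trying. A minimal sketch, assuming the audioContext and startButton variables from the snippet below:

startButton.addEventListener("click", () => {
    // On iOS an AudioContext created (or kept) outside a user gesture often
    // starts out "suspended"; resume() only works from within the gesture.
    if (audioContext && audioContext.state === "suspended") {
        audioContext.resume().then(() => console.log("AudioContext resumed"));
    }
    // ...the existing start logic would follow here
});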
I'll show you my latest attempt where I tried to mute/disable the stream in between user commands and keep the AudioContext open:
var audioStream: MediaStream;
var audioContext: AudioContext;
var scriptProcessor: ScriptProcessorNode;

var startButton = document.getElementById("startButton");
startButton.onclick = () => {
    if (!audioStream) {
        getUserAudioStream();
    } else {
        // unmute/enable stream
        audioStream.getAudioTracks()[0].enabled = true;
    }
}

var stopButton = document.getElementById("stopButton");
stopButton.onclick = () => {
    // mute/disable stream
    audioStream.getAudioTracks()[0].enabled = false;
}

function getUserAudioStream(): Promise<any> {
    return navigator.mediaDevices.getUserMedia({
        audio: true
    } as MediaStreamConstraints).then((stream: MediaStream) => {
        audioStream = stream;
        startRecording();
    }).catch((e) => { ... });
}

const startRecording = () => {
    const ctx = (window as any).AudioContext || (window as any).webkitAudioContext;
    if (!ctx) {
        console.error("No Audio Context available in browser.");
        return;
    } else {
        audioContext = new ctx();
    }
    const inputPoint = audioContext.createGain();
    const microphone = audioContext.createMediaStreamSource(audioStream);
    scriptProcessor = inputPoint.context.createScriptProcessor(4096, 1, 1);
    microphone.connect(inputPoint);
    inputPoint.connect(scriptProcessor);
    scriptProcessor.connect(inputPoint.context.destination);
    scriptProcessor.addEventListener("audioprocess", streamCallback);
};

const streamCallback = (e) => {
    const samples = e.inputBuffer.getChannelData(0);
    // Here I stream the audio chunks to the server and
    // observe that the buffer sometimes contains only empty bytes...
}
I hope the snippet makes sense to you, because I left some things out to keep it readable. I think I made clear that this is only one of many attempts, and my actual question is: is there some special characteristic of WebRTC/getUserMedia on iOS that I have missed so far? Why does iOS treat MediaStream differently from Chrome/Firefox on Windows? As a last comment: I know that ScriptProcessorNode is no longer recommended. I'd actually like to use MediaRecorder for this, but it is not yet supported on iOS either. Also, the polyfill I know of is not really suitable, because it only supports Ogg for streaming audio, which also leads to problems because I would need to set the sample rate to a fixed value.
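Where AudioWorklet is available (it was added to Safari later), the ScriptProcessorNode part could be replaced by something roughly like the sketch below. This is only an illustration under assumptions: the processor name is made up, the module is loaded from a Blob URL purely to keep the example self-contained, and it reuses the audioContext and audioStream variables from the snippet above.

// Worklet processor source, inlined so the sketch is self-contained.
const processorSource = `
class ChunkForwarder extends AudioWorkletProcessor {
    process(inputs) {
        const channel = inputs[0][0];
        if (channel) {
            // Copy the 128-sample block and hand it to the main thread.
            this.port.postMessage(channel.slice(0));
        }
        return true; // keep processing
    }
}
registerProcessor('chunk-forwarder', ChunkForwarder);
`;

const moduleUrl = URL.createObjectURL(
    new Blob([processorSource], { type: 'application/javascript' }));

audioContext.audioWorklet.addModule(moduleUrl).then(() => {
    const source = audioContext.createMediaStreamSource(audioStream);
    const worklet = new AudioWorkletNode(audioContext, 'chunk-forwarder');
    worklet.port.onmessage = (e) => {
        // e.data is a Float32Array of samples, analogous to the audioprocess buffer,
        // and could be chunked and sent to the server here.
    };
    source.connect(worklet);
    worklet.connect(audioContext.destination); // keeps the node pulled by the graph, like the ScriptProcessor above
});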
I am trying to play a video on iOS while listening to only one side of the stereo audio.
The code below works fine in desktop Chrome but not in Safari on an iPhone 5 with iOS 8.3.
var AudioContext = window.webkitAudioContext;
var audioCtx = new AudioContext();
var splitter = audioCtx.createChannelSplitter(2);
var merger = audioCtx.createChannelMerger(1);
var source = audioCtx.createMediaElementSource(video);
source.connect(splitter);
splitter.connect(merger, 0);
merger.connect(audioCtx.destination);
'video' is the reference to the DOM video element.
Any help would be much appreciated.
Thanks a lot,
Sa'ar
Here is a polyfill that checks whether Web Audio exists and whether the -webkit prefix is needed.
//borrowed from underscore.js
function isUndef(val) {
    return val === void 0;
}

if (isUndef(window.AudioContext)) {
    window.AudioContext = window.webkitAudioContext;
}

if (!isUndef(AudioContext)) {
    audioContext = new AudioContext();
} else {
    throw new Error("Web Audio is not supported in this browser");
}
Also, there is a similar iOS error here that might help.
Check which kind of AudioContext the device supports instead of using ||. For example:
var audioCtx;
if ('webkitAudioContext' in window) {
    audioCtx = new webkitAudioContext();
} else {
    audioCtx = new AudioContext();
}
I'm trying to mute only the local audio playback in WebRTC, more specifically after getUserMedia() and prior to any server connection being made. None of the options I've found work; this one from Muaz Khan fails:
var audioTracks = localMediaStream.getAudioTracks();

// if MediaStream has reference to microphone
if (audioTracks[0]) {
    audioTracks[0].enabled = false;
}
source
This technique is also described here as "working", but fails here on Chrome Version 39.0.2171.95 (64-bit) (Ubuntu 14.04).
Another method that is said to work uses volume gain:
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContext = new AudioContext();
var source = audioContext.createMediaStreamSource(clientStream);
var volume = audioContext.createGain();
source.connect(volume);
volume.connect(audioContext.destination);
volume.gain.value = 0; //turn off the speakers
tl;dr I don't want to hear the input from my microphone on my speakers, but I do want to see my video image.
Workaround
This workaround was suggested by Benjamin Trent and it mutes the audio by setting the muted attribute on the video tag like so:
document.getElementById("html5vid").muted = true;
There is also a similar question, but it's for video, so not the same.
Adding this as an answer because it is the de facto correct one:
What you stated as a workaround is what's used by many major WebRTC Video platforms:
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then(stream => {
        const vid = document.getElementById('html5vid');
        vid.autoplay = true;
        vid.muted = true;
        vid.srcObject = stream;
    });
I've been toying with WebRTC but I'm completely unable to play a simple audio stream after properly granting rights to the browser to use the input device.
I just try to connect the input device to the context destination, but it doesn't work.
This snippet isn't working and I think it should:
function success(stream)
{
    var audioContext = new webkitAudioContext();
    var mediaStreamSource = audioContext.createMediaStreamSource(stream);
    mediaStreamSource.connect(audioContext.destination);
}

navigator.webkitGetUserMedia({audio:true, video:false}, success);
This doesn't seem to capture any sound from my working microphone, but if I use a simple <audio> tag and create a blob URL, the code suddenly starts working.
function success(stream)
{
    var audio = document.querySelector('audio');
    audio.src = window.URL.createObjectURL(stream);
    audio.play();
}

navigator.webkitGetUserMedia({audio:true, video:false}, success);
Also, not a single one of these demos seems to work for me: http://webaudiodemos.appspot.com/.
Fiddle for the first snippet: http://jsfiddle.net/AvMtt/
Fiddle for the second snippet: http://jsfiddle.net/vxeDg/
Using Chrome 28.0.1500.71 beta-m on Windows 7x64.
I have a single input device, and two output devices (speakers, headsets). Every device is using the same sample rate.
This question is almost 6 years old, but for anyone who stumbles across it, the modern version of this looks something like:
function success(stream) {
    let audioContext = new AudioContext();
    let mediaStreamSource = audioContext.createMediaStreamSource(stream);
    mediaStreamSource.connect(audioContext.destination);
}

navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then(success)
    .catch((e) => {
        console.dir(e);
    });
It appears to work, based on https://jsfiddle.net/jmcker/g3j1yo85