Unable to play sound in Google Chrome using a MediaStreamAudioSourceNode - javascript

I've been toying with WebRTC but I'm completely unable to play a simple audio stream after properly granting rights to the browser to use the input device.
I just try to connect the input device to the context destination, but it doesn't work.
This snippet isn't working and I think it should:
function success(stream) {
    var audioContext = new webkitAudioContext();
    var mediaStreamSource = audioContext.createMediaStreamSource(stream);
    mediaStreamSource.connect(audioContext.destination);
}
navigator.webkitGetUserMedia({audio: true, video: false}, success);
This doesn't seem to capture any sound from my working microphone, but if I use a simple <audio> tag and create a blob URL, the code suddenly starts working.
function success(stream) {
    audio = document.querySelector('audio');
    audio.src = window.URL.createObjectURL(stream);
    audio.play();
}
navigator.webkitGetUserMedia({audio: true, video: false}, success);
Also, not a single one of these demos seems to work for me: http://webaudiodemos.appspot.com/.
Fiddle for the first snippet: http://jsfiddle.net/AvMtt/
Fiddle for the second snippet: http://jsfiddle.net/vxeDg/
Using Chrome 28.0.1500.71 beta-m on Windows 7 x64.
I have a single input device and two output devices (speakers, headset). Every device uses the same sample rate.

This question is almost 6 years old, but for anyone who stumbles across it, the modern version of this looks something like:
function success(stream) {
    let audioContext = new AudioContext();
    let mediaStreamSource = audioContext.createMediaStreamSource(stream);
    mediaStreamSource.connect(audioContext.destination);
}

navigator.mediaDevices.getUserMedia({audio: true, video: false})
    .then(success)
    .catch((e) => {
        console.dir(e);
    });
It appears to work, based on https://jsfiddle.net/jmcker/g3j1yo85.
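One thing that can still trip this up in current Chrome is the autoplay policy: an AudioContext created outside a user gesture may start in the "suspended" state, so it may need an explicit resume() before any sound is heard. A minimal sketch of that, assuming success() ends up being reachable from a click handler:
function success(stream) {
    let audioContext = new AudioContext();
    let mediaStreamSource = audioContext.createMediaStreamSource(stream);
    mediaStreamSource.connect(audioContext.destination);

    // Chrome's autoplay policy may leave the context "suspended" until a
    // user gesture; resuming it explicitly unblocks playback.
    if (audioContext.state === "suspended") {
        audioContext.resume();
    }
}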

Related

Streaming microphone input with getUserMedia under iOS Safari

I'm developing a website where the user can send audio commands which are captured with getUserMedia (only audio) and interpreted in the backend with a Speech-to-Text service. In order to keep the latency as low as possible, I'm sending small audio chunks to my server. This is working just fine on Chrome/Firefox and even Edge. However, I'm struggling with iOS Safari. I know that Safari is my only choice on Apple devices because of the missing WebRTC support on iOS Chrome/Firefox.
The problem is that I normally get the user's voice a couple of times (for some commands), but then, without any recognizable pattern, the stream suddenly contains only empty bytes. I tried a lot of different strategies, but in general I stuck to the following plan:
1. After the user clicks a button, call getUserMedia (with the audio constraint) and save the stream to a variable.
2. Create an AudioContext (incl. Gain, MediaStreamSource, ScriptProcessor) and connect the audio stream to the MediaStreamSource.
3. Register an event listener on the ScriptProcessor and send audio chunks to the server in the callback.
4. When a result is returned from the server, close the AudioContext and the audio's MediaStream.
The interesting part is what happens after a subsequent user command. I tried various things: calling getUserMedia again for each command and closing the MediaStream track each time, reusing the initially created MediaStream and reconnecting the event handler every time, closing the AudioContext after every call, using only one initially created AudioContext... All my attempts have failed so far, because I either got empty bytes from the stream or the AudioContext was created in a "suspended" state. Only closing the MediaStream/AudioContext and recreating them every time seems to be more stable, but fetching the MediaStream with getUserMedia takes quite a while on iOS (~1.5-2 s), which makes for a bad user experience.
I'll show you my latest attempt where I tried to mute/disable the stream in between user commands and keep the AudioContext open:
var audioStream: MediaStream;
var audioContext: AudioContext;
var scriptProcessor: ScriptProcessorNode;

var startButton = document.getElementById("startButton");
startButton.onclick = () => {
    if (!audioStream) {
        getUserAudioStream();
    } else {
        // unmute/enable the existing stream
        audioStream.getAudioTracks()[0].enabled = true;
    }
}

var stopButton = document.getElementById("stopButton");
stopButton.onclick = () => {
    // mute/disable the stream until the next command
    audioStream.getAudioTracks()[0].enabled = false;
}

function getUserAudioStream(): Promise<any> {
    return navigator.mediaDevices.getUserMedia({
        audio: true
    } as MediaStreamConstraints).then((stream: MediaStream) => {
        audioStream = stream;
        startRecording();
    }).catch((e) => { ... });
}
const startRecording = () => {
    const ctx = (window as any).AudioContext || (window as any).webkitAudioContext;
    if (!ctx) {
        console.error("No Audio Context available in browser.");
        return;
    } else {
        audioContext = new ctx();
    }
    const inputPoint = audioContext.createGain();
    const microphone = audioContext.createMediaStreamSource(audioStream);
    scriptProcessor = inputPoint.context.createScriptProcessor(4096, 1, 1);
    microphone.connect(inputPoint);
    inputPoint.connect(scriptProcessor);
    scriptProcessor.connect(inputPoint.context.destination);
    scriptProcessor.addEventListener("audioprocess", streamCallback);
};
const streamCallback = (e) => {
    const samples = e.inputBuffer.getChannelData(0);
    // Here I stream audio chunks to the server and
    // observe that buffer sometimes only contains empty bytes...
}
I hope the snippet makes sense; I left some things out to keep it readable. I think I've made clear that this is only one of many attempts, and my actual question is: is there some special characteristic of WebRTC/getUserMedia on iOS that I've missed so far? Why does iOS treat MediaStream differently than Chrome/Firefox on Windows? As a last comment: I know that ScriptProcessorNode is no longer recommended. I'd actually like to use MediaRecorder for this, but it is not yet supported on iOS either. Also, the polyfill I know of is not really suitable, because it only supports Ogg for streaming audio, which also leads to problems because I would need to set the sample rate to a fixed value for that.
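One thing worth ruling out on iOS (this is an assumption based on the "suspended" observation above, not a confirmed fix): Safari only lets an AudioContext start or resume from inside a user gesture, so the click handler could check and resume the existing context before re-enabling the track. A minimal sketch, reusing the variables from the snippet above:
startButton.onclick = async () => {
    // On iOS, a context created earlier often ends up "suspended";
    // resume() must be called from inside a user gesture such as this click.
    if (audioContext && audioContext.state === "suspended") {
        await audioContext.resume();
    }
    if (!audioStream) {
        getUserAudioStream();
    } else {
        audioStream.getAudioTracks()[0].enabled = true;
    }
};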

Play Mic audio back continuously

I see a lot of questions about how to record audio, then stop recording, then play the audio or save it to a file, but none of that is what I want.
tl;dr Here's my question in a nutshell: "How can I immediately play audio recorded from the user's microphone?" That is, I don't want to save a recording and play it when the user hits a "Play" button, I don't want to save a recording to a file on the user's computer and I don't want to use WebRTC to stream audio anywhere. I just want to talk into my microphone and hear my voice come out the speakers.
All I'm trying to do is make a very simple "echo" page that immediately plays back audio recorded from the mic. I started with a MediaRecorder object, but that wasn't working, and from what I can tell it's meant for recording full audio files, so I switched to an AudioContext-based approach.
A very simple page would just look like this:
<!DOCTYPE html>
<head>
    <script type="text/javascript" src="mcve.js"></script>
</head>
<body>
    <audio id="speaker" volume="1.0"></audio>
</body>
and the script looks like this:
if (navigator.mediaDevices) {
    var constraints = {audio: true};
    navigator.mediaDevices.getUserMedia(constraints).then(
        function (stream) {
            var context = new AudioContext();
            var source = context.createMediaStreamSource(stream);
            var proc = context.createScriptProcessor(2048, 2, 2);
            source.connect(proc);
            proc.onaudioprocess = function(e) {
                console.log("audio data collected");
                let audioData = new Blob(e.inputBuffer.getChannelData(0), {type: 'audio/ogg'})
                    || new Blob(new Float32Array(2048), {type: 'audio/ogg'});
                var speaker = document.getElementById('speaker');
                let url = URL.createObjectURL(audioData);
                speaker.src = url;
                speaker.load();
                speaker.play().then(
                    () => { console.log("Playback success!"); },
                    (error) => { console.log("Playback failure... ", error); }
                );
            };
        }
    ).catch( (error) => {
        console.error("couldn't get user media.");
    });
}
It can record non-trivial audio data (i.e. not every collection winds up as a Blob made from the new Float32Array(2048) call), but it can't play it back. It never hits the "couldn't get user media" catch, but it always hits the "Playback failure..." path. The error prints like this:
DOMException [NotSupportedError: "The media resource indicated by the src attribute or assigned media provider object was not suitable."
code: 9
nsresult: 0x806e0003]
Additionally, the message Media resource blob:null/<long uuid> could not be decoded. is printed to the console repeatedly.
There are two things that could be going on here, as near as I can tell (maybe both):
1. I'm not encoding the audio. I'm not sure if this is a problem, since I thought that data collected from the mic came with 'ogg' encoding automagically, and I've tried leaving the type property of my Blobs blank to no avail. If this is what's wrong, I don't know how to encode a chunk of audio given to me by the audioprocess event, and that's what I need to know.
2. An <audio> element is fundamentally incapable of playing audio fragments, even if properly encoded. Maybe by not having a full file, there's some missing or extraneous metadata that violates encoding standards and is preventing the browser from understanding me. If this is the case, maybe I need a different element, or even an entirely scripted solution. Or perhaps I'm supposed to construct a file-like object in place for each chunk of audio data?
I've built this code on examples from MDN and SO answers, and I should mention I've tested my mic at this example demo and it appears to work perfectly.
The ultimate goal here is to stream this audio through a websocket to a server and relay it to other users. I DON'T want to use WebRTC if at all possible, because I don't want to limit myself to only web clients - once it's working okay, I'll make a desktop client as well.
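For reference, one way to sidestep the Blob/<audio> route entirely is to treat each audioprocess chunk as raw PCM: send the underlying ArrayBuffer over a WebSocket, and/or wrap it in an AudioBuffer and schedule it for playback on the same context. A rough sketch under those assumptions (the ws://localhost:8080/audio endpoint is hypothetical), reusing context and proc from the snippet above:
var playTime = 0;                                     // next scheduled playback time
var ws = new WebSocket("ws://localhost:8080/audio");  // hypothetical relay server
ws.binaryType = "arraybuffer";

proc.onaudioprocess = function (e) {
    var samples = e.inputBuffer.getChannelData(0);

    // Relay the raw PCM chunk to the server (copy it, since the buffer is reused).
    if (ws.readyState === WebSocket.OPEN) {
        ws.send(samples.buffer.slice(0));
    }

    // Local echo without <audio>/Blob: wrap the chunk in an AudioBuffer
    // and schedule the chunks back to back on the same AudioContext.
    var buffer = context.createBuffer(1, samples.length, context.sampleRate);
    buffer.getChannelData(0).set(samples);
    var src = context.createBufferSource();
    src.buffer = buffer;
    src.connect(context.destination);
    playTime = Math.max(playTime, context.currentTime);
    src.start(playTime);
    playTime += buffer.duration;
};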
Check the example https://jsfiddle.net/greggman/g88v7p8c/ from https://stackoverflow.com/a/38280110/351900.
It needs to be run from HTTPS.
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

var aCtx;
var analyser;
var microphone;

if (navigator.getUserMedia) {
    navigator.getUserMedia(
        {audio: true},
        function(stream) {
            aCtx = new AudioContext();
            microphone = aCtx.createMediaStreamSource(stream);
            var destination = aCtx.destination;
            microphone.connect(destination);
        },
        function(){ console.log("Error 003."); }
    );
}

Change sample rate of AudioContext (getUserMedia)

I'm trying to record at 48000 Hz via getUserMedia, but without luck. The returned audio MediaStream is at 44100 Hz. How can I set this to 48000 Hz?
Here are snippets of my code:
var startUsermedia = this.startUsermedia;

navigator.getUserMedia({
    audio: true,
    //sampleRate: 48000
}, startUsermedia, function (e) {
    console.log('No live audio input: ' + e);
});
The startUsermedia function:
startUsermedia: function (stream) {
    var input = audio_context.createMediaStreamSource(stream);
    console.log('Media stream created.');
    // Uncomment if you want the audio to feedback directly
    //input.connect(audio_context.destination);
    //__log('Input connected to audio context destination.');
    recorder = new Recorder(input);
    console.log('Recorder initialised.');
},
I tried changing the sampleRate property of the AudioContext, but no luck.
How can I change the sample rate to 48000 Hz?
EDIT: We would also now be okay with a Flash solution that can record and export WAV files at 48000 Hz.
As far as I know, there is no way to change the sample rate within an audio context. The sample rate will usually be the sample rate of your recording device and will stay that way. So you will not be able to write something like this:
var input = audio_context.createMediaStreamSource(stream);
var resampler = new Resampler(44100, 48000);
input.connect(resampler);
resampler.connect(audio_context.destination);
However, if you want to take your audio stream, resample it and then send it to the backend (or do something else with it outside of the Web Audio API), you can use an external sample rate converter (e.g. https://github.com/taisel/XAudioJS/blob/master/resampler.js).
var resampler = new Resampler(44100, 48000, 1, 2229);

function startUsermedia(stream) {
    var input = audio_context.createMediaStreamSource(stream);
    console.log('Media stream created.');

    recorder = audio_context.createScriptProcessor(2048);
    recorder.onaudioprocess = recorderProcess;
    recorder.connect(audio_context.destination);
}

function recorderProcess(e) {
    var buffer = e.inputBuffer.getChannelData(0);
    var resampled = resampler.resampler(buffer);
    //--> do something with the resampled data, for instance send it to the server
}
It looks like there is an open bug about the inability to set the sampling rate:
https://github.com/WebAudio/web-audio-api/issues/300
There's also a Chrome issue:
https://bugs.chromium.org/p/chromium/issues/detail?id=432248
I checked the latest Chromium code and there is nothing in there that lets you set the sampling rate.
Edit: Seems like it has been implemented in Chrome, but is broken currently - see the comments in the Chromium issue.
It's been added to Chrome:
var ctx = new (window.AudioContext || window.webkitAudioContext)({ sampleRate:16000});
https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/AudioContext
audioContext = new AudioContext({sampleRate: 48000})
Simply set the sample rate when creating the AudioContext object; this worked for me.
NOTE: This answer is outdated.
You can't. The sample rate of the AudioContext is set by the browser/device and there is nothing you can do to change it. In fact, you will find that 44.1 kHz on your machine might be 48 kHz on mine. It varies according to whatever the OS picks by default.
Also remember that not all hardware is capable of all sample rates.
You can use an OfflineAudioContext to essentially render your audio buffer to a different sample rate (but this is a batch operation).
So you would record your audio using the normal audio context, and then use an OfflineAudioContext with a different sample rate to render your buffer. There is an example on the Mozilla page.
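A rough sketch of that approach (the helper name and 48 kHz target are just illustrative, not from the original answer), assuming recordedBuffer is an AudioBuffer captured at the context's native rate:
// Re-render an existing AudioBuffer at 48 kHz with an OfflineAudioContext.
function resampleTo48k(recordedBuffer) {
    var targetRate = 48000;
    var frameCount = Math.ceil(recordedBuffer.duration * targetRate);
    var offlineCtx = new OfflineAudioContext(
        recordedBuffer.numberOfChannels, frameCount, targetRate);

    var source = offlineCtx.createBufferSource();
    source.buffer = recordedBuffer;
    source.connect(offlineCtx.destination);
    source.start(0);

    // Resolves with a new AudioBuffer whose sampleRate is 48000.
    return offlineCtx.startRendering();
}

// Usage: resampleTo48k(myBuffer).then(function (resampled) { /* export as WAV, etc. */ });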
It is now in the spec but not yet implemented in Chromium.
Also in bugs.chromium.org, "Status: Available" does not mean it is implemented. It just means that nobody is working on it and that it is available for anyone who wants to work on it. So "Available" means "Not assigned".

Detect if audio is playing in browser Javascript

Is there a global way to detect when audio is playing or starts playing in the browser?
Something along the lines of if (window.mediaPlaying()) {...,
without having the code tied to a specific element?
EDIT: What's important here is being able to detect ANY audio, no matter where it comes from: an iframe, a video, the Web Audio API, etc.
No one should use this but it works.
Basically the only way that I found to access the entire window's audio is using MediaDevices.getDisplayMedia().
From there, the MediaStream can be fed into an AnalyserNode, which can be used to check whether the audio volume is greater than zero.
This only works in Chrome and maybe Edge (only tested in Chrome 80 on Linux).
JSFiddle with <video>, <audio> and YouTube!
Important bits of code (cannot post in a working snippet because of the Feature Policies on the snippet iframe):
var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();

var bufferLength = analyser.fftSize;
var dataArray = new Float32Array(bufferLength);

window.isAudioPlaying = () => {
    analyser.getFloatTimeDomainData(dataArray);
    for (var i = 0; i < bufferLength; i++) {
        if (dataArray[i] != 0) return true;
    }
    return false;
}
navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true
})
.then(stream => {
    if (stream.getAudioTracks().length > 0) {
        var source = audioCtx.createMediaStreamSource(stream);
        source.connect(analyser);
        document.body.classList.add('ready');
    } else {
        console.log('Failed to get stream. Audio not shared or browser not supported');
    }
}).catch(err => console.log("Unable to open capture: ", err));
I read all the MDN docs about the Web Audio API, but I didn't find any global flag on window that shows audio playing. However, I found a simple trick that checks whether any <audio> or <video> element in the document is currently playing (it does not cover the Web Audio API):
const allAudio = Array.from( document.querySelectorAll('audio') );
const allVideo = Array.from( document.querySelectorAll('video') );
const isPlaying = [...allAudio, ...allVideo].some(item => !item.paused);
Now, using the isPlaying flag, we can detect whether any audio or video element on the page is playing.
There is a playbackState property (https://developer.mozilla.org/en-US/docs/Web/API/MediaSession/playbackState), but not all browsers support it.
if(navigator.mediaSession.playbackState === "playing"){...
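Worth keeping in mind: playbackState is also writable, so a page that drives its own playback can keep it in sync for this check to be useful. A small sketch under that assumption, wiring it to a single <audio> element:
const player = document.querySelector("audio");
if ("mediaSession" in navigator && player) {
    // Mirror the element's state so navigator.mediaSession.playbackState stays accurate.
    player.addEventListener("play",  () => { navigator.mediaSession.playbackState = "playing"; });
    player.addEventListener("pause", () => { navigator.mediaSession.playbackState = "paused"; });
    player.addEventListener("ended", () => { navigator.mediaSession.playbackState = "none"; });
}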
I was looking for a solution on Google, but I haven't found anything yet.
Maybe you could check some value that only changes when audio is playing. If you have a button that starts playing the audio file, you could be reasonably sure the audio is playing by adding an event listener to that button, or to the <audio> tag itself. If I remember correctly, the audio element has a paused attribute.
Also, you may want to check this topic: HTML5 check if audio is playing? (I just found it a few seconds ago.)

Chrome won't play WebAudio getUserMedia via WebRTC/Peer.js

I want to make a simple audio only stream over WebRTC, using Peer.js. I'm running the simple PeerServer locally.
The following works perfectly fine in Firefox 30, but I can't get it to work in Chrome 35. I would expect there to be something wrong with the PeerJS setup, but Chrome -> Firefox works perfectly fine, while Chrome -> Chrome seems to send the stream but won't play it over the speakers.
Setting up getUserMedia
Note: uncommenting the lines below lets me hear the loopback in both Chrome and Firefox.
navigator.getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);
window.AudioContext = window.AudioContext || window.webkitAudioContext;

if (navigator.getUserMedia) {
    navigator.getUserMedia({video: false, audio: true}, getMediaSuccess, getMediaError);
} else {
    alert('getUserMedia not supported.');
}

var localMediaStream;
//var audioContext = new AudioContext();

function getMediaSuccess(mediaStream) {
    //var microphone = audioContext.createMediaStreamSource(mediaStream);
    //microphone.connect(audioContext.destination);
    localMediaStream = mediaStream;
}

function getMediaError(err) {
    alert('getUserMedia error. See console.');
    console.error(err);
}
Making the connection
var peer = new Peer({host: '192.168.1.129', port: 9000});

peer.on('open', function(id) {
    console.log('My ID:', id);
});

peer.on('call', function(call) {
    console.log('answering call with', localMediaStream);
    call.answer(localMediaStream);
    //THIS WORKS IN CHROME, localMediaStream exists

    call.on('stream', function(stream) {
        console.log('streamRecieved', stream);
        //THIS WORKS IN CHROME, the stream has come through

        var audioContext = new AudioContext();
        var audioStream = audioContext.createMediaStreamSource(stream);
        audioStream.connect(audioContext.destination);
        //I HEAR AUDIO IN FIREFOX, BUT NOT CHROME
    });

    call.on('error', function(err) {
        console.log(err);
        //LOGS NO ERRORS
    });
});

function connect(id) {
    var voiceStream = peer.call(id, localMediaStream);
}
This still appears to be an issue even in Chrome 73.
The solution that saved me for now is to also connect the media stream to a muted HTML audio element. This seems to make the stream work and audio starts flowing into the WebAudio nodes.
This would look something like:
let a = new Audio();
a.muted = true;
a.srcObject = stream;
a.addEventListener('canplaythrough', () => {
    a = null;
});

let audioStream = audioContext.createMediaStreamSource(stream);
audioStream.connect(audioContext.destination);
JSFiddle: https://jsfiddle.net/jmcker/4naq5ozc/
Original Chromium issue and workaround:
https://bugs.chromium.org/p/chromium/issues/detail?id=121673#c121
New Chromium issues: https://bugs.chromium.org/p/chromium/issues/detail?id=687574 and https://bugs.chromium.org/p/chromium/issues/detail?id=933677
In Chrome, there is currently a known bug where remote audio streams gathered from a peer connection are not accessible through the Web Audio API.
Latest comment on the bug:
We are working really hard towards the feature. The reason why this takes long time is that we need to move the APM to chrome first, implement a render mixer to get the unmixed data from WebRtc, then we can hook up the remote audio stream to webaudio.
It was recently patched in Firefox; I remember this being an issue there as well in the past.
I was unable to play the stream using Web Audio, but I did manage to play it using a basic audio element:
var audio = new Audio();
audio.src = (URL || webkitURL || mozURL).createObjectURL(remoteStream);
audio.play();
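A note for anyone trying this today: current browsers have removed URL.createObjectURL() for MediaStream objects, so assigning the stream to srcObject (with the old call as a fallback) is the safer route. A minimal sketch:
var audio = new Audio();
if ("srcObject" in audio) {
    audio.srcObject = remoteStream;                  // modern replacement
} else {
    audio.src = URL.createObjectURL(remoteStream);   // legacy fallback
}
audio.play();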
