I'm using JavaScript's getUserMedia() to get access to the user's microphone and record the audio with Recorder.js.
Everything works fine except that the sound level is very low on mobile (tested Safari and Chrome on iOS) but perfect on desktop (Chrome, FF, Safari).
I have tried to adjust the gain with gainNode = audioContext.createGain(); and that affects the level from 0.0 (no sound) to 1.0 (normal sound) or higher (distorted sound). The problem is that 1.0 is perfect on desktop but very, very low on mobile. If I go to e.g. gain = 25 the volume is much higher, but also very distorted and therefore not usable.
Is it possible to get a good-quality sound level on iOS, and how?
Here is my script so far:
var constraints = { audio: true, video: false };
navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
    // Get mic input
    audioContext = new AudioContext();
    gumStream = stream;
    input = audioContext.createMediaStreamSource(stream);
    // Set gain level
    gainNode = audioContext.createGain();
    gainNode.gain.value = 1.0;
    input.connect(gainNode);
    // Handle recording
    rec = new Recorder(gainNode);
    rec.record();
    // Audio visualizer
    analyser = audioContext.createAnalyser();
    freqs = new Uint8Array(analyser.frequencyBinCount);
    input.connect(analyser);
    requestAnimationFrame(visualize);
}).catch(function(err) {
    console.error('getUserMedia failed:', err);
});
UPDATE:
After a lot of research I found that the recording itself is fine. The problem is that when I play the recorded audio without refreshing the browser, the sound is very low. If I refresh the browser, the sound is perfect.
What is really weird is that if I put an alert() in my stopRecording() function, the sound plays perfectly even without refreshing the browser.
Tested in iOS Safari; could this be an iOS bug?
function stopRecording() {
    rec.stop();
    gumStream.getTracks().forEach(function(track) {
        if (track.readyState === 'live' && track.kind === 'audio') {
            track.stop();
        }
    });
    alert('Recording is complete');
    rec.exportWAV(handleRecording);
}
It's a bit like Safari doesn't release or end the getUserMedia() session, and as long as that is 'on', audio.play() has low sound. Maybe the alert() changes browser focus and that's why it works(?)
I would really like to avoid having an alert there, but I don't know how this can be fixed.
Related
I've built a demo of a voice assistant that takes microphone data, passes it to an analyser, then uses .getByteFrequencyData() to show visuals. It works as follows:
Press the mic button to connect to the microphone input.
Releasing the mic button disconnects the microphone stream and plays an MP3 of the response.
When the MP3 ends: return to standby and wait for a new button press to start step 1 again.
Live version here: https://dyadstudios.com/playground/daysi/
The way I've achieved this is as follows:
var audioContext = (window.AudioContext) ? new AudioContext() : new window["webkitAudioContext"]();
var analyser = audioContext.createAnalyser();
analyser.fftSize = Math.pow(2, 9); // 512
var sourceMic = undefined; // Microphone stream source
var sourceMp3 = undefined; // MP3 buffer source
// Browser requests mic access
window.navigator.mediaDevices.getUserMedia({audio: true}).then((stream) => {
sourceMic = audioContext.createMediaStreamSource(stream)
})
// 1. Mic button pressed, start listening
listen() {
audioContext.resume();
// Connect mic to analyser
if (sourceMic) {
sourceMic.connect(analyser);
}
}
// 2. Disconnect mic, play mp3
answer(mp3AudioBuffer) {
if (sourceMic) {
// Disconnect mic to prevent audio feedback
sourceMic.disconnect();
}
// Play mp3
sourceMp3 = audioContext.createBufferSource();
sourceMp3.onended = mp3StreamEnded;
sourceMp3.buffer = mp3AudioBuffer;
sourceMp3.connect(analyser);
sourceMp3.start(0);
// Connect to speakers to hear MP3
analyser.connect(audioContext.destination);
}
// 3. MP3 has ended
mp3StreamEnded() {
sourceMp3.disconnect();
// Disconnect speakers (prevents mic feedback)
analyser.disconnect();
}
It works perfectly well on Firefox and Chrome, but on macOS Safari 12.1 the analyser only gets microphone data the first time I press the button. Whenever I press the mic button on a second pass, the analyser no longer gets microphone data, but MP3 data still works. It seems like connecting, disconnecting, and re-connecting the mic's AudioNode to the analyser breaks it somehow. I checked, and Safari supports AudioNode.connect() as well as AudioNode.disconnect(). I know Safari's Web Audio implementation is a bit outdated; is there a workaround to fix this issue?
There is indeed a bug in Safari which causes it to drop the signal if a MediaStreamAudioSourceNode is disconnected for some time. You can avoid this by just not disconnecting it as long as you might need it again. You can use a GainNode instead to mute the signal.
You could do this by introducing a new variable to control the volume.
const sourceMicVolume = audioContext.createGain();
sourceMicVolume.gain.value = 0;
Then you need to connect everything right away when you instantiate the sourceMic.
sourceMic = audioContext.createMediaStreamSource(stream);
sourceMic.connect(sourceMicVolume);
sourceMicVolume.connect(analyser);
Inside your event handlers you would then only set the volume of the gain instead of (dis)connecting the nodes. Inside the listen() function that would look like this:
if (sourceMic) {
sourceMicVolume.gain.value = 1;
}
And inside the answer() function it would look like this:
if (sourceMic) {
sourceMicVolume.gain.value = 0;
}
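Putting the pieces together, the whole workaround can be sketched as two small helpers. The names wireMic and setMicMuted are illustrative (not from the original code), and the browser objects are passed in as parameters so the wiring logic stands alone.

```javascript
// Connect the graph once, up front, and never disconnect the mic source.
// Returns the GainNode used to mute/unmute.
function wireMic(audioContext, stream, analyser) {
    var micVolume = audioContext.createGain();
    micVolume.gain.value = 0;                  // start muted
    var sourceMic = audioContext.createMediaStreamSource(stream);
    sourceMic.connect(micVolume);              // connected for the page's lifetime
    micVolume.connect(analyser);
    return micVolume;
}

// Toggle the gain instead of calling connect()/disconnect() in the handlers.
function setMicMuted(micVolume, muted) {
    micVolume.gain.value = muted ? 0 : 1;
}
```

In listen() you would call setMicMuted(micVolume, false), and in answer() setMicMuted(micVolume, true); the MediaStreamAudioSourceNode itself is never disconnected, which sidesteps the Safari bug.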
I am attempting to play an audio blob in Safari, and it plays for a fraction of a second; I never hear any audio. The media element fires a "paused" event a tiny fraction of a second into playback (for example, 0.038 s).
The blob is recorded in Chrome. Playback works just fine in Chrome and Firefox.
Also, the duration of the media reported by Safari is much shorter than it should be. For example, a given recording is 7.739 seconds: Chrome recognizes the correct duration, but Safari shows a duration of 1.584. Another had a duration of 9.96, but Safari reported 6.552.
I have made sure this is not an issue with Safari preventing playback that is not initiated by the user, so playback starts on a tap. I have also tried different MIME types: mpeg, and webm with the h264 and vp8 codecs.
I have made sure that the downloaded blob is the same size in Safari as it is in Chrome.
I have looked through a number of similar posts, including the one with the answer by @lastmjs, Loading audio via a Blob URL fails in Safari, where a demo is provided. The demo does work, and I am doing more or less what is shown. I suspect the problem is on the recording side.
Recorder:
self.mediaRecorder = new MediaRecorder(stream,{'audio' : {'sampleRate' : 22000}});
...assemble the chunks...
self.audioBlob = new Blob(self.audioChunks, {type: 'audio/webm; codecs=vp8'});
...upload the blob to cloud (S3)...
Player:
...in the success handler that downloads blob...
self.audioBlob = new Blob([data],{type: 'audio/webm'});
...I later prepare the element for playback...
let audioUrl = window.URL.createObjectURL(self.audioBlob);
let audioElement = document.createElement('audio');
let sourceElement = document.createElement('source');
audioElement.muted = true;
audioElement.appendChild(sourceElement);
sourceElement.src = audioUrl;
sourceElement.type = 'audio/webm';
document.body.appendChild(audioElement);
audioElement.load()
... when the user taps on a button...
self.audioElement.muted = false;
let playPromise = self.audioElement.play();
playPromise.then(()=>{
console.log("playing should have started: " + self.audioElement.muted + " - " + self.audioElement.paused);
});
...shortly after this - the paused event handler gets fired.
There are no error messages. I am trying this in Safari on Mac and on iOS. No errors. I also listen for the error event on the media element, and nothing fires. It just doesn't play for very long. I am clearly missing something. Again, capture and playback work great in Chrome, and playback works in Firefox, but playback in Safari just won't work. What should I try?
For everyone having the same problem: try changing 'audio/webm' to 'audio/wav'.
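For illustration, the change is just the MIME type passed when (re)assembling the blob; the chunk bytes below are placeholders, not real audio data.

```javascript
// Reassemble the downloaded chunks with the 'audio/wav' type instead of
// 'audio/webm'. The Uint8Array contents here are placeholder bytes.
const audioChunks = [new Uint8Array([0, 1, 2, 3])];
const audioBlob = new Blob(audioChunks, { type: 'audio/wav' }); // was 'audio/webm'
console.log(audioBlob.type); // "audio/wav"
```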
I've been playing with a few different Web Audio API libraries and have had mixed results. My favourite so far is Timbre.js. I generally get a 'buzz' coming out of the speaker on iOS (even when using AudioContextMonkeyPatch). Sometimes this does not happen: for example, reboot the phone, start the app, click the 'go' button, and the sound is identical (to my ears) to my desktop browser. Make a change (e.g. change the tempo) and buzz, buzz, buzz. Generally, though, the audio output is buzz buzz buzz.
Example code:
var freqs = T(function(count) {
return [220, 440, 660, 880][count % 4];
});
var osc = T("sin", {freq:freqs, mul:0.5});
var env = T("perc", {a:50, r:500}, osc).bang();
var interval = T("param", {value:500}).linTo(50, "30sec");
T("interval", {interval:interval}, freqs, env).start();
env.play();
I asked a similar question a little while after you (Distortion in WebAudio API in iOS9?) and believe I found an answer: WebKit Audio distorts on iOS 6 (iPhone 5) the first time after power cycling.
Summary: play an audio sample at the desired sample rate and then create a new context.
// inside the click/touch handler
var playInitSound = function playInitSound() {
    var source = context.createBufferSource();
    source.buffer = context.createBuffer(1, 1, 48000);
    source.connect(context.destination);
    if (source.start) {
        source.start(0);
    } else {
        source.noteOn(0);
    }
};
playInitSound();
if (context.sampleRate === 48000) {
    context = new AudioContext();
    playInitSound();
}
Editing to note that it's possible you'd have to do some hacking of Timbre.js to get this to work, but it at least worked for me in using Web Audio on its own.
I am playing around with getUserMedia to gain access to the user's microphone in Chrome (Version 28.0.1500.72 m). I am able to record and play back the user's input when they use an internal microphone with internal speakers.
As soon as I plug in a USB microphone headset, I am no longer able to record the user's input. I have switched the device in the Chrome settings under privacy and content settings, so Chrome does see the newly plugged-in microphone. I have also restarted Chrome and tried again after plugging in the mic. Still no user input.
Thanks in advance.
Below is the current code I am using.
window.AudioContext = window.AudioContext||window.webkitAudioContext;
var html5Recorder;
var audioContext = new AudioContext();
navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia || navigator.oGetUserMedia;
if (navigator.getUserMedia) {
    navigator.getUserMedia({ audio: true }, handleAudioStream, audioError);
} else {
    console.log('Use Flash');
}

function handleAudioStream(stream) {
    var mediaStream = audioContext.createMediaStreamSource(stream);
    mediaStream.connect(audioContext.destination);
    html5Recorder = new HTML5Recorder(mediaStream);
    html5Recorder.stop();
    html5Recorder.clear();
}

function audioError(error) {
    console.log(error);
}

function record() {
    html5Recorder.record();
}

function stopRecording() {
    html5Recorder.stop();
    html5Recorder.exportWAV(function(e) {
        console.log(e);
        console.log(window.URL.createObjectURL(e));
        document.getElementById('audio1').src = window.URL.createObjectURL(e);
        HTML5Recorder.forceDownload(e);
    });
}
This was a bug in the Chrome build I was using (28). Chrome Canary works fine.
Can you check the sampling rate on the two audio devices?
There is an existing bug where a non-default microphone only works if its sample rate is the same as the default microphone's: https://code.google.com/p/chromium/issues/detail?id=164058.
Also, are you on OS X or Linux? The comments in the bug suggest it should be fixed on Windows.
Try selecting the USB mic as the default one. Chrome does not manage audio devices, and it always uses the default mic.
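As a side note, browsers that support navigator.mediaDevices.enumerateDevices() let you request a specific microphone explicitly instead of relying on the system default. This is a sketch under that assumption; pickMicByLabel is a made-up helper name, and matching on a "USB" label fragment is purely illustrative.

```javascript
// Find an audio input device whose label contains the given fragment.
// Returns undefined if no such device exists.
function pickMicByLabel(devices, labelFragment) {
    return devices.find(function (d) {
        return d.kind === 'audioinput' && d.label.indexOf(labelFragment) !== -1;
    });
}

// In the browser (labels are only populated after a prior permission grant):
// navigator.mediaDevices.enumerateDevices().then(function (devices) {
//     var usbMic = pickMicByLabel(devices, 'USB');
//     if (usbMic) {
//         return navigator.mediaDevices.getUserMedia({
//             audio: { deviceId: { exact: usbMic.deviceId } }
//         });
//     }
// });
```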
I've been experimenting with connecting an audio element to the Web Audio API using createMediaElementSource and got it to work, but one thing I need to do is change the playback rate of the audio tag, and I couldn't get that to work.
If you run the code below, you'll see that it works until you uncomment the line where we set the playback rate. With that line in, the audio gets muted.
I know I can set the playback rate on an AudioBufferSourceNode using source.playbackRate.value, but that is not what I'd like to do. I need to set the playback rate on the audio element while it's connected to the Web Audio API using createMediaElementSource, so I don't have any AudioBufferSourceNode.
Has anyone managed to do that?
var _source,
_audio,
_context,
_gainNode;
_context = new webkitAudioContext();
function play(url) {
if (_audio) {
_audio.pause();
}
_audio = new Audio(url);
//_audio.playbackRate = 0.6;
setTimeout(function() {
if (!_gainNode) {
_gainNode = _context.createGainNode();
_gainNode.gain.value = 0.1;
_gainNode.connect(_context.destination);
}
_source = _context.createMediaElementSource(_audio);
_source.connect(_gainNode);
_audio.play();
}, 0);
}
play("http://geo-samples.beatport.com/items/volumes/volume2/items/3000000/200000/40000/9000/400/60/3249465.LOFI.mp3");
setTimeout(function () {
_audio.pause();
}, 4000);
You have to set the playback rate after the audio has started playing. The only portable way I have found to make this work is to wait until you get a timeupdate event with a valid currentTime:
_audio.addEventListener('timeupdate', function() {
    if (!isNaN(_audio.currentTime)) {
        _audio.playbackRate = 0.6;
    }
});
Note that playback rate isn't currently supported on Android, and that Chrome (on desktop) doesn't support playback rates lower than 0.5.
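If you want to guard against unsupported values, a tiny clamping helper can keep the requested rate within a range the browser handles. The lower bound of 0.5 comes from the Chrome limitation mentioned above; the upper bound of 4 here is an assumption, not a documented limit.

```javascript
// Clamp a requested playback rate into [min, max].
function clampPlaybackRate(rate, min, max) {
    return Math.min(max, Math.max(min, rate));
}

// Usage sketch: _audio.playbackRate = clampPlaybackRate(0.3, 0.5, 4); // -> 0.5
```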
Which browser are you using to test this? It seems this is not yet implemented in Firefox but should work in Chrome.
Mozilla bug for implementing playbackRate:
https://bugzilla.mozilla.org/show_bug.cgi?id=495040