I've been playing with a few different Web Audio API libraries, with mixed results. My favourite so far is Timbre.js. On iOS I generally get a 'buzz' coming out of the speaker (even when using AudioContextMonkeyPatch). Occasionally it doesn't happen: for example, if I reboot the phone, start the app, and click the 'go' button, the sound is (to my ears) identical to my desktop browser. But make a change (e.g. change the tempo) and it's buzz buzz buzz. Most of the time, though, the audio output is just buzz buzz buzz.
Example code:
var freqs = T(function(count) {
  return [220, 440, 660, 880][count % 4]; // cycle through 220/440/660/880 Hz
});
var osc = T("sin", {freq:freqs, mul:0.5});                 // sine oscillator at half volume
var env = T("perc", {a:50, r:500}, osc).bang();            // percussive envelope: 50ms attack, 500ms release
var interval = T("param", {value:500}).linTo(50, "30sec"); // ramp the interval from 500ms down to 50ms over 30s
T("interval", {interval:interval}, freqs, env).start();    // re-trigger freqs and env every `interval` ms
env.play();
I asked a similar question a little while after you (Distortion in WebAudio API in iOS9?) and believe I found an answer: WebKit Audio distorts on iOS 6 (iPhone 5) first time after power cycling
Summary: play an audio sample at the desired sample rate and then create a new context.
// inside the click/touch handler
var playInitSound = function playInitSound() {
  var source = context.createBufferSource();
  source.buffer = context.createBuffer(1, 1, 48000); // 1-sample buffer at 48 kHz
  source.connect(context.destination);
  if (source.start) {
    source.start(0);
  } else {
    source.noteOn(0); // older webkit-prefixed API
  }
};

playInitSound();

// if the context came up at 48 kHz, recreate it and warm it up again
if (context.sampleRate === 48000) {
  context = new AudioContext();
  playInitSound();
}
Editing to note that it's possible you'd have to do some hacking of Timbre.js to get this to work, but it at least worked for me when using Web Audio on its own.
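For what it's worth, here's roughly how that could be wired into the first user gesture. This is only a sketch: 'go' is the button from your description, and startApp() is a placeholder for whatever builds your own Timbre.js / Web Audio graph.

var context = new (window.AudioContext || window.webkitAudioContext)();

document.getElementById('go').addEventListener('touchend', function unlock() {
  playInitSound();                  // the 1-sample warm-up buffer defined above

  // if the context came up at 48 kHz, recreate it and warm it up again
  if (context.sampleRate === 48000) {
    context = new AudioContext();
    playInitSound();
  }

  startApp(context);                // placeholder: start your own nodes here
  document.getElementById('go').removeEventListener('touchend', unlock);
}, false);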
Our website records audio and plays it back for the user. It has worked for years with many different devices, but it started failing on the iPhone 14. I created a test app at https://nmp-recording-test.netlify.app/ so I can see what is going on. It works perfectly on every device I've tried, but on an iPhone 14 it only works the first time. It works on other iPhones, and it works on iPads and MacBooks using Safari or any other browser.
It looks like it will record only if that is the first audio the page ever does. If I create an AudioContext somewhere else first, audio playback through it works, but then the recording won't.
The only symptom I can see is that it doesn't call MediaRecorder.ondataavailable when it is not working, but I assume that is because it isn't recording.
Here is the pattern that I'm seeing with my test site:
Click "new recording". (the level indicator moves, the data available callback is triggered)
Click "listen" I hear what I just did
Click "new recording". (no levels move, no data is reported)
Click "listen" nothing is played.
But if I do anything else first, like clicking the metronome on and off, then it won't record the FIRST time either.
The "O.G. Recording" is the original way I was doing the recording, using deprecated method createMediaStreamSource() and createScriptProcessor()/createJavaScriptNode(). I thought maybe iPhone finally got rid of that, so I created the MediaRecorder version.
What I'm doing in the MediaRecorder version, basically, is this (truncated to show the important part):
const chunks: Blob[] = []

function onSuccess(stream: MediaStream) {
  mediaRecorder = new MediaRecorder(stream);
  mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);     // collect a chunk of encoded audio
  }
  mediaRecorder.start(1000); // request a dataavailable event every 1000 ms
}

navigator.mediaDevices.getUserMedia({ audio: true }).then(onSuccess, onError);
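The teardown isn't shown above; with this setup it would typically be something along these lines (a sketch, not the site's actual code, and the MIME type in particular is an assumption):

// Hypothetical stop handler: finalize the recording and build a playable Blob
function stopRecording() {
  mediaRecorder.onstop = function () {
    const blob = new Blob(chunks, { type: mediaRecorder.mimeType || 'audio/mp4' });
    const url = URL.createObjectURL(blob);
    new Audio(url).play(); // roughly what the "listen" button would do
  };
  mediaRecorder.stop();
}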
Has anyone else seen anything different in the way the iPhone 14 handles recording?
Does anyone have a suggestion about how to debug this?
If you have an iPhone 14, would you try my test program above and let me know if you get the same results? We only have one iPhone 14 to test with, and maybe there is something weird about that device.
If it works you should see a number of lines something like data {"len":6784} appear every second when you are recording.
--- EDIT ---
I reworked the code along the lines of Frank zeng's suggestion and it now records, but it is still not right: the volume is really low, there appear to be some dropouts, and there is a really long pause when resuming the AudioContext.
The new code seems to work perfectly in the other devices and browsers I have access to.
--- EDIT 2 ---
There were two problems: the first is that the deprecated createScriptProcessor stopped working, and the second was an iOS bug that was fixed in version 16.2. So rewriting to use an AudioWorklet was needed, but keeping the recording going once it has started is not.
I have the same problem as you. I think AudioContext.createScriptProcessor no longer works on the iPhone 14, so I replaced it with the newer AudioWorkletNode API. Also, don't close the stream, because otherwise the second recording session on the iPhone 14 is too laggy; just remember to destroy the data after recording. After testing, this solved the problem for me. Here's my code:
// get stream
window.navigator.mediaDevices.getUserMedia(options).then(async (stream) => {
  // that.stream = stream
  that.context = new AudioContext()
  await that.context.resume()
  const rate = that.context.sampleRate || 44100
  that.mp3Encoder = new lamejs.Mp3Encoder(1, rate, 128)
  that.mediaSource = that.context.createMediaStreamSource(stream)
  // createScriptProcessor is being phased out: keep using it where it still works,
  // otherwise fall back to the worklet approach to capture the audio data
  if (that.context.createScriptProcessor && typeof that.context.createScriptProcessor === 'function') {
    that.mediaProcessor = that.context.createScriptProcessor(0, 1, 1)
    that.mediaProcessor.onaudioprocess = event => {
      window.postMessage({ cmd: 'encode', buf: event.inputBuffer.getChannelData(0) }, '*')
      that._decode(event.inputBuffer.getChannelData(0))
    }
  } else { // use the new worklet-based approach
    that.mediaProcessor = await that.initWorklet()
  }
  resolve()
})
// content of the audioworklet function
async initWorklet() {
  try {
    /* node that receives the audio stream data */
    let audioWorkletNode;
    /* load the AudioWorkletProcessor module and add it to the current worklet */
    await this.context.audioWorklet.addModule('/get-voice-node.js');
    /* bind the AudioWorkletNode to the loaded AudioWorkletProcessor */
    audioWorkletNode = new AudioWorkletNode(this.context, "get-voice-node");
    /* the AudioWorkletNode and the AudioWorkletProcessor communicate via a MessagePort */
    console.log('audioWorkletNode', audioWorkletNode)
    const messagePort = audioWorkletNode.port;
    messagePort.onmessage = (e) => {
      let channelData = e.data[0];
      window.postMessage({ cmd: 'encode', buf: channelData }, '*')
      this._decode(channelData)
    }
    return audioWorkletNode;
  } catch (e) {
    console.log(e)
  }
}
// content of get-voice-node.js (remember to put it in the static resource directory)
class GetVoiceNode extends AudioWorkletProcessor {
  /*
   * options are passed in by new AudioWorkletNode()
   */
  constructor() {
    super()
  }
  /*
   * `inputList` and `outputList` are arrays of inputs and outputs;
   * the awkward part is that each call only carries 128 samples??? not sure how to configure that
   */
  process (inputList, outputList, parameters) {
    // console.log(inputList)
    if (inputList.length > 0 && inputList[0].length > 0) {
      this.port.postMessage(inputList[0]);
    }
    return true // return true so the system knows we are still active and ready to process audio
  }
}
registerProcessor('get-voice-node', GetVoiceNode)
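One thing the snippets above don't show is feeding the microphone into the worklet node; for the worklet path to receive input, the MediaStreamAudioSourceNode still has to be connected to it. Something along these lines (names taken from the code above; the muted GainNode is my own addition to keep the node in the rendered graph without hearing the live mic):

// after initWorklet() resolves
that.mediaProcessor = await that.initWorklet()
that.mediaSource.connect(that.mediaProcessor)

// keep the worklet node in the rendered graph, but silence the monitoring path
const silence = that.context.createGain()
silence.gain.value = 0
that.mediaProcessor.connect(silence)
silence.connect(that.context.destination)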
Destroy the recording instance and free the memory; if you want to record again later, it's better to create a new instance:
this.recorder.stop()
this.audioDurationTimer && window.clearInterval(this.audioDurationTimer)
const audioBlob = this.recorder.getMp3Blob()
// Destroy the recording instance and free the memory
this.recorder = null
I've built a demo of a voice assistant that takes microphone data, passes it to an analyzer, then uses .getByteFrequencyData() to show visuals. It works as follows:
Press the mic button to connect to the microphone input.
Releasing the mic button disconnects the microphone stream and plays an MP3 of the response.
When the MP3 ends: return to standby and wait for a new button press to start step 1 again.
Live version here: https://dyadstudios.com/playground/daysi/
The way I've achieved this is as follows:
var audioContext = (window.AudioContext) ? new AudioContext() : new window["webkitAudioContext"]();
var analyser = audioContext.createAnalyser();
analyser.fftSize = Math.pow(2, 9); // 512
var sourceMic = undefined; // Microphone stream source
var sourceMp3 = undefined; // MP3 buffer source
// Browser requests mic access
window.navigator.mediaDevices.getUserMedia({audio: true}).then((stream) => {
  sourceMic = audioContext.createMediaStreamSource(stream)
})

// 1. Mic button pressed, start listening
function listen() {
  audioContext.resume();
  // Connect mic to analyser
  if (sourceMic) {
    sourceMic.connect(analyser);
  }
}

// 2. Disconnect mic, play mp3
function answer(mp3AudioBuffer) {
  if (sourceMic) {
    // Disconnect mic to prevent audio feedback
    sourceMic.disconnect();
  }
  // Play mp3
  sourceMp3 = audioContext.createBufferSource();
  sourceMp3.onended = mp3StreamEnded;
  sourceMp3.buffer = mp3AudioBuffer;
  sourceMp3.connect(analyser);
  sourceMp3.start(0);
  // Connect to speakers to hear MP3
  analyser.connect(audioContext.destination);
}

// 3. MP3 has ended
function mp3StreamEnded() {
  sourceMp3.disconnect();
  // Disconnect speakers (prevents mic feedback)
  analyser.disconnect();
}
It works perfectly well on Firefox and Chrome, but OSX Safari 12.1 only gets microphone data the first time I press the button. Whenever I press the mic button on a second pass, the analyzer no longer gets microphone data, but MP3 data still works. It seems like connecting, disconnecting, and re-connecting the mic's AudioNode to the analyzer breaks it somehow. I checked, and Safari supports both AudioNode.connect() and AudioNode.disconnect(). I know Safari's WebAudio implementation is a bit outdated; is there a workaround to fix this issue?
There is indeed a bug in Safari which causes it to drop the signal if a MediaStreamAudioSourceNode is disconnected for some time. You can avoid this by just not disconnecting it as long as you might need it again. You can use a GainNode instead to mute the signal.
You could do this by introducing a new variable to control the volume.
const sourceMicVolume = audioContext.createGain();
sourceMicVolume.gain.value = 0;
Then you need to connect everything right away when you instantiate the sourceMic.
sourceMic = audioContext.createMediaStreamSource(stream);
sourceMic.connect(sourceMicVolume);
sourceMicVolume.connect(analyser);
Inside your event handlers you would then only set the volume of the gain instead of (dis)connecting the nodes. Inside the listen() function that would look like this:
if (sourceMic) {
sourceMicVolume.gain.value = 1;
}
And inside the answer() function it would look like this:
if (sourceMic) {
sourceMicVolume.gain.value = 0;
}
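Putting it together, the graph is built once up front and the handlers only flip the gain (a condensed sketch of the approach above, reusing the names from the question):

const sourceMicVolume = audioContext.createGain();
sourceMicVolume.gain.value = 0; // start muted

window.navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  sourceMic = audioContext.createMediaStreamSource(stream);
  sourceMic.connect(sourceMicVolume);   // never disconnected again
  sourceMicVolume.connect(analyser);
});

function listen() {
  audioContext.resume();
  sourceMicVolume.gain.value = 1;       // unmute instead of connect()
}

// inside answer(), before starting the MP3 source:
// sourceMicVolume.gain.value = 0;      // mute instead of disconnect()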
I've been playing a lot lately with the Web Audio API in both Firefox and Chrome. A few days ago I started to experiment with a surround set. While trying surround audio with the Web Audio API I noticed that my example works fine in Chrome v59 but not in Firefox v54.
// create web audio api context
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
audioCtx.destination.channelInterpretation = 'discrete';
audioCtx.destination.channelCountMode = 'explicit';
audioCtx.destination.channelCount = 6;
var oscillators = [];
var merger = audioCtx.createChannelMerger(6);
console.log(audioCtx.destination, merger);
//merger.channelInterpretation = 'discrete';
//merger.channelCountMode = 'explicit';
//merger.channelCount = 6;
var addOscillator = function(channel, frequency) {
  var oscillator = audioCtx.createOscillator();
  oscillator.frequency.value = frequency; // value in hertz
  oscillator.connect(merger, 0, channel); // route this oscillator to one input (channel) of the merger
  oscillator.start();
  oscillators.push(oscillator);
};

addOscillator(0, 300);
addOscillator(1, 500);
addOscillator(2, 700);
addOscillator(3, 900);
addOscillator(4, 1100);
addOscillator(5, 1300);
merger.connect(audioCtx.destination);
When I log the audio context destination to the console, maxChannelCount is 6 and channelCount is also 6, but I still only get output on the left and right channels, and those two channels play all of the output. (So it's downmixed from 5.1 to stereo.)
I also tried playing a 5.1 surround audio file in an HTML audio element in Firefox and that worked fine. In other words, the browser recognizes the surround set and is able to output to it.
Am I doing something wrong in this example, or is this a feature not yet implemented by Firefox (given that the Web Audio API is still a draft)? I can't find any reports like this, so I think it's a fault on my side, but the inconsistency between the browsers makes me doubt that.
Thanks in advance
After some deeper research it turned out to be a bug in Firefox v54:
https://bugzilla.mozilla.org/show_bug.cgi?id=1378070
The channels are processed but are downmixed to stereo at the end. Mozilla was, at the time of writing, working on a fix.
Marked as resolved in Firefox v57.
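If you want to guard against browsers that end up downmixing, you can at least check what the destination claims to support before building the 5.1 graph. Note that this would not have caught the Firefox bug above, since Firefox still reported 6 channels; it's just a sketch of basic feature detection:

var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var wanted = 6;
var available = audioCtx.destination.maxChannelCount; // 2 on plain stereo hardware

if (available >= wanted) {
  audioCtx.destination.channelInterpretation = 'discrete';
  audioCtx.destination.channelCountMode = 'explicit';
  audioCtx.destination.channelCount = wanted;
} else {
  // fall back to a stereo mix, or warn the user
  console.warn('Only ' + available + ' output channels available; expect a downmix.');
}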
A simple usage of the Web Audio API:
var UnprefixedAudioContext = window.AudioContext || window.webkitAudioContext;
var context;
var volumeNode;
var soundBuffer;
context = new UnprefixedAudioContext();
volumeNode = context.createGain();
volumeNode.connect(context.destination);
volumeNode.gain.value = 1;
context.decodeAudioData(base64ToArrayBuffer(getTapWarm()), function (decodedAudioData) {
  soundBuffer = decodedAudioData;
});

function play(buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(volumeNode);
  (source.start || source.noteOn).call(source, 0);
};

function playClick() {
  play(soundBuffer);
}
inside a UIWebView works fine (it plays the sound); but if you switch to the Music app, play a song, and then come back to the app with the UIWebView, the song stops playing.
The same code inside Safari doesn't have this problem.
Is there a workaround to avoid this behavior?
Here's the full fiddle:
http://jsfiddle.net/gabrielmaldi/4Lvdyhpx/
Are you on iOS? This sounds like an audio session category issue to me. iOS apps define how their audio interacts with other apps' audio through audio session categories. From Apple's documentation:
Each audio session category specifies a particular pattern of “yes” and “no” for each of the following behaviors, as detailed in Table B-1:
Interrupts non-mixable apps audio: If yes, non-mixable apps will be interrupted when your app activates its audio session.
Silenced by the Silent switch: If yes, your audio is silenced when the user moves the Silent switch to silent. (On iPhone, this switch is called the Ring/Silent switch.)
Supports audio input: If yes, app audio input (recording), is allowed.
Supports audio output: If yes, app audio output (playback), is allowed.
Looks like the default category silences audio from other apps:
AVAudioSessionCategorySoloAmbient—(Default) Playback only. Silences audio when the user switches the Ring/Silent switch to the “silent” position and when the screen locks. This category differs from the AVAudioSessionCategoryAmbient category only in that it interrupts other audio.
The key here is in the last sentence: "it interrupts other audio".
There are a number of other categories you can use depending on whether or not you want your audio silenced when the screen is locked, etc. AVAudioSessionCategoryAmbient does not silence audio.
Give this a try in the objective-c portion of your app:
NSError *setCategoryError = nil;
BOOL success = [[AVAudioSession sharedInstance]
setCategory: AVAudioSessionCategoryAmbient
error: &setCategoryError];
if (!success) { /* handle the error in setCategoryError */ }
In web audio, I can't get the ScriptProcessor node to work in Chrome, although it works fine in Firefox.
// Create audio context (Chrome/Firefox)
var context;
if (window.AudioContext) {
  context = new AudioContext();
} else {
  context = new webkitAudioContext();
}
// Create oscillator and start it
oscillator = context.createOscillator();
oscillator.start(0);
// Set up a script node that sets output to white noise
var myscriptnode = context.createScriptProcessor(4096, 1, 1);
myscriptnode.onaudioprocess = function(event) {
  console.log('Processing buffer');
  var output = event.outputBuffer.getChannelData(0);
  for (var i = 0; i < output.length; i++) {
    output[i] = Math.random() / 10;
  }
};
// Connect oscillator to script node and script node to destination
// (should output white noise)
oscillator.connect(myscriptnode);
myscriptnode.connect(context.destination);
// NOTE: This commented-out code connects oscillator directly to
// destination, which works in Chrome as well as Firefox.
//oscillator.connect(context.destination);
The expected result of this sample is that it plays white noise at 1/10 volume (the oscillator is actually ignored).
You can try this code at http://jsfiddle.net/78yKV/3/ - be aware that on Firefox this URL will play white noise straight away! On Chrome 30, it doesn't give any errors, but also doesn't give any audio output. I also checked in Chrome 31 beta but saw the same results. The 'Processing buffer' log entry never appears.
To test the general audio system, if you uncomment the last line and connect the oscillator directly to the destination, it does play audio (the oscillator tone) correctly on Chrome. But I can't get the ScriptProcessor to work on Chrome.
I searched the net for tutorials etc. with ScriptProcessor but those I found either didn't come with runnable examples or didn't work (or were too complex).
(Just to make clear - this is a stripped-down sample and doesn't relate in any way to what I'm actually trying to do, so please don't tell me that I shouldn't use a ScriptProcessor to generate white noise. That's not what it's for; I do absolutely need ScriptProcessor to work for my real usage.)
I think most likely I am doing something very stupid like I have the wrong event name or something like that, but I can't find it. Can anyone help?
I now managed to check on several other machines and I think the problem is specific to the default audio device on my machine, which is a telephone handset using the Microsoft default USB audio driver. I've reported this to Google using the menu option in Chrome; my speculation is that the problem occurs because the handset only supports mono 16 kHz output, and somehow this causes Chrome to get confused.
I can reproduce the bug on a colleague's machine which has the same make of handset. To reiterate:
Firefox works correctly on both machines when using the handset.
Both machines work correctly in Chrome when you select a different output device.
The oscillator playback works correctly in Chrome even when using the telephone handset.
Final version of test code http://jsfiddle.net/78yKV/7/
function doStuff(osc) {
  // Create audio context (Chrome/Firefox)
  var context;
  if (window.AudioContext) {
    context = new AudioContext();
  } else {
    context = new webkitAudioContext();
  }

  // Set up a script node that sets output to white noise
  var myscriptnode;
  if (context.createScriptProcessor) {
    myscriptnode = context.createScriptProcessor(4096, 1, 1);
  } else {
    myscriptnode = context.createJavaScriptNode(4096, 1, 1);
  }

  var buffer = 1;
  myscriptnode.onaudioprocess = function(event) {
    console.log('Processing buffer ' + (buffer++));
    var output = event.outputBuffer.getChannelData(0);
    for (var i = 0; i < output.length; i++) {
      output[i] = Math.random() / 10;
    }
  };

  // Connect either the oscillator (baseline test) or the script node to the destination
  if (osc) {
    var oscillator = context.createOscillator();
    oscillator.start(0);
    oscillator.connect(context.destination);
  } else {
    myscriptnode.connect(context.destination);
  }
}
The white noise playback from this script (well actually a slightly earlier test version but I think it's the same) works in Chrome 30 on Windows 7, Windows 8.1, Linux, and Android 4.1; on Firefox on Windows; on an iPad (latest OS); and on a Mac using Safari 6.0.5 as well (it breaks if you open the developer tools there, but as long as you don't, it works). It only fails when using the USB telephone handset (Polycom CX300) mentioned.
So in other words, as apsillers suggested, this still looks like a Chrome bug, but a rather specific one. (By the way I also tried the latest 'Canary' version of Chrome but it didn't help.)
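For anyone who runs into a similarly device-specific failure, one pragmatic safeguard is a small watchdog that checks whether onaudioprocess ever fires and reports the problem instead of silently playing nothing. This is just a sketch, not part of the test code above:

// Wrap an existing ScriptProcessor handler with a "did we ever get called?" check
function addWatchdog(context, scriptNode) {
  var gotAudio = false;
  var originalHandler = scriptNode.onaudioprocess;

  scriptNode.onaudioprocess = function (event) {
    gotAudio = true;
    if (originalHandler) originalHandler(event);
  };

  scriptNode.connect(context.destination);

  setTimeout(function () {
    if (!gotAudio) {
      console.warn('onaudioprocess never fired; this output device may not work with Chrome.');
    }
  }, 2000);
}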