I am attempting to use a ChannelSplitter node to send an audio signal to both a ChannelMerger node and the destination, and then to use the ChannelMerger node to merge two different audio signals (one from the split source, one from the microphone via getUserMedia) into a recorder using Recorder.js.
I keep getting the following error: "Uncaught SyntaxError: An invalid or illegal string was specified."
The error is at the following line of code:
audioSource.splitter.connect(merger);
Where audioSource is an instance of ThreeAudio.Source from the library ThreeAudio.js, splitter is a channel splitter I instantiated myself by modifying the prototype, and merger is my merger node. The code that precedes it is:
merger = context.createChannelMerger(2);
userInput.connect(merger);
Where userInput is the stream from the user's microphone. That connection works without throwing an error. Sound is getting from audioSource to the destination (I can hear it), so the splitter itself doesn't seem to be wrong; I just can't seem to connect it to the merger.
Does anyone have any insight?
I was struggling to understand the ChannelSplitterNode and ChannelMergerNode API. I finally found the missing piece: the 2nd and 3rd optional parameters of the connect() method, the output and input channel indices.
connect(destinationNode: AudioNode, output?: number, input?: number): AudioNode;
When using the connect() method with splitter or merger nodes, specify the output/input channel indices. This is how you split and merge the audio data.
You can see in this example how I load audio data, split it into 2 channels, and control the left/right output. Notice the 2nd and 3rd parameters of the connect() calls:
const audioUrl = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/858/outfoxing.mp3";
const audioElement = new Audio(audioUrl);
audioElement.crossOrigin = "anonymous"; // cross-origin - if file is stored on remote server
const audioContext = new AudioContext();
const audioSource = audioContext.createMediaElementSource(audioElement);
const volumeNodeL = new GainNode(audioContext);
const volumeNodeR = new GainNode(audioContext);
volumeNodeL.gain.value = 2;
volumeNodeR.gain.value = 2;
const channelsCount = 2; // or read from: 'audioSource.channelCount'
const splitterNode = new ChannelSplitterNode(audioContext, { numberOfOutputs: channelsCount });
const mergerNode = new ChannelMergerNode(audioContext, { numberOfInputs: channelsCount });
audioSource.connect(splitterNode);
splitterNode.connect(volumeNodeL, 0); // connect OUTPUT channel 0
splitterNode.connect(volumeNodeR, 1); // connect OUTPUT channel 1
volumeNodeL.connect(mergerNode, 0, 0); // connect INPUT channel 0
volumeNodeR.connect(mergerNode, 0, 1); // connect INPUT channel 1
mergerNode.connect(audioContext.destination);
let isPlaying;
function playPause() {
  // check if context is in suspended state (autoplay policy)
  if (audioContext.state === 'suspended') {
    audioContext.resume();
  }
  isPlaying = !isPlaying;
  if (isPlaying) {
    audioElement.play();
  } else {
    audioElement.pause();
  }
}

function setBalance(val) {
  volumeNodeL.gain.value = 1 - val;
  volumeNodeR.gain.value = 1 + val;
}
<h3>Try using headphones</h3>
<button onclick="playPause()">play/pause</button>
<br><br>
<button onclick="setBalance(-1)">Left</button>
<button onclick="setBalance(0)">Center</button>
<button onclick="setBalance(+1)">Right</button>
P.S.: The audio track isn't a true stereo track, but left and right copies of the same mono playback. You can try this example with a real stereo recording for a true balance effect.
Here's some working splitter/merger code that creates a ping-pong delay - that is, it sets up separate delays on the L and R channels of a stereo signal, and crosses over the feedback. This is from my input effects demo on webaudiodemos.appspot.com (code on github).
var merger = context.createChannelMerger(2);
var leftDelay = context.createDelay();
var rightDelay = context.createDelay();
var leftFeedback = context.createGain();
var rightFeedback = context.createGain();
var splitter = context.createChannelSplitter(2);
// Split the stereo signal.
splitter.connect( leftDelay, 0 );
// If the signal is dual copies of a mono signal, we don't want the right channel -
// it will just sound like a mono delay. If it was a real stereo signal, we do want
// it to just mirror the channels.
if (isTrueStereo)
    splitter.connect( rightDelay, 1 );
leftDelay.delayTime.value = delayTime;
rightDelay.delayTime.value = delayTime;
leftFeedback.gain.value = feedback;
rightFeedback.gain.value = feedback;
// Connect the routing - left bounces to right, right bounces to left.
leftDelay.connect(leftFeedback);
leftFeedback.connect(rightDelay);
rightDelay.connect(rightFeedback);
rightFeedback.connect(leftDelay);
// Re-merge the two delay channels into stereo L/R
leftFeedback.connect(merger, 0, 0);
rightFeedback.connect(merger, 0, 1);
// Now connect your input to "splitter", and connect "merger" to your output destination.
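To make that last comment concrete, the final hookup might look something like this (a sketch; "input" is a placeholder for whatever source node you already have, and context is the same AudioContext as above):
// Illustrative hookup only: "input" stands for your existing source node.
input.connect(splitter);              // the stereo source feeds the splitter
merger.connect(context.destination);  // the re-merged stereo signal goes to the output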
Using the JavaScript AudioContext interface, I want to create an audio stream that continuously plays a dynamically created, 1-second-long waveform. That waveform should be updated when I change a slider on the HTML page, etc.
So basically I want to feed in a vector containing 44100 floats that represents that 1-second long waveform.
So far I have
const audio = new AudioContext({
  latencyHint: "interactive",
  sampleRate: 44100,
});
but I am not sure how to apply that vector/list/data structure with my actual waveform.
Hint: I want to add audio to this PyScript example.
This example might help you; it generates a random float array and plays that, I think. You can click "GO" multiple times to generate a new wave.
WARNING: BE CAREFUL WITH THE VOLUME IF YOU RUN THIS. IT CAN BE DANGEROUSLY LOUD!
function makeWave(audioContext) {
  const floats = []
  for (let i = 44000; i--;) floats.push(Math.random() * 2 - 1)
  console.log("New waveform done")
  const sineTerms = new Float32Array(floats)
  const cosineTerms = new Float32Array(sineTerms.length)
  const customWaveform = audioContext.createPeriodicWave(cosineTerms, sineTerms)
  return customWaveform
}

let audioCtx, oscillator, gain, started = false

document.querySelector("#on").addEventListener("click", () => {
  // Initialize only once
  if (!gain) {
    audioCtx = new AudioContext() // Controls speakers
    gain = audioCtx.createGain() // Controls volume
    oscillator = audioCtx.createOscillator() // Controls frequency
    gain.connect(audioCtx.destination) // connect gain node to speakers
    oscillator.connect(gain) // connect oscillator to gain
    oscillator.start()
  }
  const customWaveform = makeWave(audioCtx)
  oscillator.setPeriodicWave(customWaveform)
  gain.gain.value = 0.02 // ☠🕱☠ CAREFUL WITH VOLUME ☠🕱☠
})

document.querySelector("#off").addEventListener("click", () => {
  gain.gain.value = 0
})
<b>SET VOLUME VERY LOW BEFORE TEST</b>
<button id="on">GO</button>
<button id="off">STOP</button>
Here's what I want to do:
Send microphone audio to an AudioWorkletProcessor (works)
Send results from AudioWorkletProcessor to a server using WebSockets (works)
Receive back the data over WebSockets (works)
Send data to the computer's speakers (how to do it?)
Everything works except I don't know how to implement #4. Here's what I have, some things simplified to keep focus on the problem:
//1. The code to set up the audio context. Connects the microphone to the worklet
const audioContext = new AudioContext({ sampleRate: 8000 });
audioContext.audioWorklet.addModule('/common/recorderworkletprocess.js').then(
  function () {
    const recorder = new AudioWorkletNode(audioContext, 'recorder-worklet');
    let constraints = { audio: true };
    navigator.mediaDevices.getUserMedia(constraints).then(function (stream) {
      const microphone = audioContext.createMediaStreamSource(stream);
      microphone.connect(recorder);
      recorder.connect(audioContext.destination);
    });
  }
);
//2. The AudioWorkletProcessor. Sends audio to the WebSocket, which sends it to the server as binary:
class RecorderWorkletProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
  }
  process(inputs) {
    const inputChannel = inputs[0][0]; //inputChannel Float32Array(128)
    socket.send(inputChannel); // sent as byte[512]
    return true;
  }
}
registerProcessor('recorder-worklet', RecorderWorkletProcessor);
//3 and 4. Finally, the server sends back the data exactly as it was received. The WebSocket converts it to ArrayBuffer(512). Here I want to do whatever it takes to output it to the computer's speaker as audio:
socket.messageReceived = function (evt) {
  // evt.data contains an ArrayBuffer with length of 512
  // I want this to be played on the computer's speakers. How to do this?
}
Any guidance would be appreciated.
OK, I believe I can answer my own question. This is not robust, but it provides what I needed to know.
socket.messageReceived = function (evt) {
  // evt.data contains an ArrayBuffer with length of 512
  // Play it on the computer's speakers:
  let fArr = new Float32Array(evt.data);
  let buf = audioContext.createBuffer(1, 128, 8000);
  buf.copyToChannel(fArr, 0);
  let player = audioContext.createBufferSource();
  player.buffer = buf;
  player.connect(audioContext.destination);
  player.start(0);
}
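One reason the above isn't robust: each 128-frame chunk is started the moment it arrives, so network jitter can cause small gaps or overlaps. A slightly sturdier variation (still just a sketch, using the same audioContext and 8000 Hz rate as above) schedules each chunk against a running clock:
let nextTime = 0; // AudioContext time at which the next chunk should start
socket.messageReceived = function (evt) {
  const fArr = new Float32Array(evt.data);                     // 128 floats
  const buf = audioContext.createBuffer(1, fArr.length, 8000); // mono, 8 kHz
  buf.copyToChannel(fArr, 0);
  const player = audioContext.createBufferSource();
  player.buffer = buf;
  player.connect(audioContext.destination);
  // Start right after the previous chunk, or now if we've fallen behind.
  nextTime = Math.max(nextTime, audioContext.currentTime);
  player.start(nextTime);
  nextTime += buf.duration;
};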
I am receiving an arrayBuffer via a socket.io event and want to be able to process and play the stream as an audio file.
I am receiving the buffer like so:
retrieveAudioStream = () => {
  this.socket.on('stream', (arrayBuffer) => {
    console.log('arrayBuffer', arrayBuffer)
  })
}
Is it possible to set the src attribute of an <audio/> element to a buffer? If not, how can I play the incoming buffer stream?
Edit:
To show how I am getting my audio input and streaming it:
window.navigator.getUserMedia(constraints, this.initializeRecorder, this.handleError);
initializeRecorder = (stream) => {
  const audioContext = window.AudioContext;
  const context = new audioContext();
  const audioInput = context.createMediaStreamSource(stream);
  const bufferSize = 2048;
  // create a javascript node
  const recorder = context.createScriptProcessor(bufferSize, 1, 1);
  // specify the processing function
  recorder.onaudioprocess = this.recorderProcess;
  // connect stream to our recorder
  audioInput.connect(recorder);
  // connect our recorder to the previous destination
  recorder.connect(context.destination);
}
This is where I receive the inputBuffer event and stream it via a socket.io event:
recorderProcess = (e) => {
  const left = e.inputBuffer.getChannelData(0);
  this.socket.emit('stream', this.convertFloat32ToInt16(left))
}
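The convertFloat32ToInt16 helper isn't shown in the question; a typical implementation (a sketch that clamps and scales samples into the signed 16-bit range) might look like this:
// Hypothetical helper: scale [-1, 1] float samples to 16-bit signed integers.
convertFloat32ToInt16 = (buffer) => {
  const out = new Int16Array(buffer.length);
  for (let i = 0; i < buffer.length; i++) {
    const s = Math.max(-1, Math.min(1, buffer[i])); // clamp
    out[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;       // scale to int16 range
  }
  return out.buffer; // the underlying ArrayBuffer is what gets sent over the socket
};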
EDIT 2:
Adding Raymond's suggestion:
retrieveAudioStream = () => {
  const audioContext = new window.AudioContext();
  this.socket.on('stream', (buffer) => {
    const b = audioContext.createBuffer(1, buffer.length, audioContext.sampleRate);
    b.copyToChannel(buffer, 0, 0)
    const s = audioContext.createBufferSource();
    s.buffer = b
  })
}
Getting error: NotSupportedError: Failed to execute 'createBuffer' on 'BaseAudioContext': The number of frames provided (0) is less than or equal to the minimum bound (0).
Based on a quick read of what initializeRecorder and recorderProcess do, it looks like you're converting the float32 samples to int16 in some way, and that gets sent to retrieveAudioStream in some way.
If this is correct, then the arrayBuffer is an array of int16 values. Convert them to float32 (most likely by dividing each value by 32768) and save them in a Float32Array. Then create an AudioBuffer of the same length and use copyToChannel(float32Array, 0, 0) to write the values to the AudioBuffer. Use an AudioBufferSourceNode with this buffer to play out the audio.
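Putting that together, the receiving side might look roughly like this (a sketch based on the description above; it assumes the incoming ArrayBuffer holds little-endian int16 samples and that both ends use the same sample rate):
retrieveAudioStream = () => {
  const audioContext = new (window.AudioContext || window.webkitAudioContext)();
  this.socket.on('stream', (arrayBuffer) => {
    const int16 = new Int16Array(arrayBuffer);   // reinterpret the received bytes as int16
    const float32 = new Float32Array(int16.length);
    for (let i = 0; i < int16.length; i++) {
      float32[i] = int16[i] / 32768;             // back to the [-1, 1) float range
    }
    const audioBuffer = audioContext.createBuffer(1, float32.length, audioContext.sampleRate);
    audioBuffer.copyToChannel(float32, 0, 0);
    const source = audioContext.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioContext.destination);
    source.start();
  });
};
If the sender's AudioContext runs at a different sample rate than the receiver's, pass that rate along and use it as the third argument to createBuffer; otherwise the pitch and speed will be off.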
I am recording browser audio input from the microphone, and sending it via websocket to a nodeJs service that writes the stream to a .wav file.
My problem is that the first recording comes out fine, but any subsequent recordings come out sounding very slow, about half the speed and are therefore unusable.
If I refresh the browser the first recording works again, and subsequent recordings are slowed down which is why I am sure the problem is not in the nodeJs service.
My project is an Angular 5 project.
I have pasted the code I am trying below.
I am using binary.js ->
https://cdn.jsdelivr.net/binaryjs/0.2.1/binary.min.js
this.client = BinaryClient(`ws://localhost:9001`)
createStream() {
  window.Stream = this.client.createStream();
  window.navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    this.success(stream);
  })
}

stopRecording() {
  this.recording = false;
  this.win.Stream.end();
}
success(e) {
  var audioContext = window.AudioContext || window.webkitAudioContext;
  var context = new audioContext();
  // the sample rate is in context.sampleRate
  var audioInput = context.createMediaStreamSource(e);
  var bufferSize = 2048;
  var recorder = context.createScriptProcessor(bufferSize, 1, 1);

  recorder.onaudioprocess = (e) => {
    if (!this.recording) return;
    console.log('recording');
    var left = e.inputBuffer.getChannelData(0);
    this.win.Stream.write(this.convertoFloat32ToInt16(left));
  }

  audioInput.connect(recorder)
  recorder.connect(context.destination);
}
convertoFloat32ToInt16(buffer) {
  var l = buffer.length;
  var buf = new Int16Array(l)
  while (l--) {
    buf[l] = buffer[l] * 0xFFFF; //convert to 16 bit
  }
  return buf.buffer
}
I am stumped as to what could be going wrong, so if anyone has experience with this browser tech, I would appreciate any help.
Thanks.
I've had this exact problem: the sample rate you are writing your WAV file with is incorrect.
You need to pass the sample rate used by the browser and the microphone to the node.js which writes the binary WAV file.
Client side:
After a successful navigator.mediaDevices.getUserMedia call (in your case, in the success function), get the sampleRate value from the AudioContext:
var _sampleRate = context.sampleRate;
Then pass it to the node.js listener as a parameter. In my case I used:
binaryClient.createStream({ SampleRate: _sampleRate });
Server (Node.js) side:
Use the passed SampleRate to set the WAV file's sample rate. In my case this is the code:
fileWriter = new wav.FileWriter(wavPath, {
channels: 1,
sampleRate: meta.SampleRate,
bitDepth: 16
});
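For reference, the receiving end on the Node.js side might be wired up roughly like this (a sketch; it assumes the binaryjs and wav npm packages, a hypothetical output path, and that the browser called createStream with the SampleRate metadata shown above):
// Hypothetical server-side sketch using the binaryjs and wav packages.
const BinaryServer = require('binaryjs').BinaryServer;
const wav = require('wav');

const server = new BinaryServer({ port: 9001 });
server.on('connection', (client) => {
  client.on('stream', (stream, meta) => {
    // meta.SampleRate arrives from binaryClient.createStream({ SampleRate: _sampleRate })
    const fileWriter = new wav.FileWriter('recording.wav', {
      channels: 1,
      sampleRate: meta.SampleRate,
      bitDepth: 16
    });
    stream.pipe(fileWriter);
    stream.on('end', () => fileWriter.end());
  });
});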
This will prevent broken, low-pitched, slow, or fast-playing WAV files.
Hope this helps.
I am trying to separate stereo microphone channels with the following JavaScript and the Web Audio API, but it is not working as I expected.
The input stereo microphone is designed to separate a telephone conversation into ch0 (speaking) and ch1 (receiving). This stereo microphone works fine with QuickTime audio recording: the speaking and receiving voices are separated into L and R. With Firefox, the speaking and receiving voices are mixed on the splitter's channel 0, and nothing comes out on channel 1. With Chrome, the speaking and receiving voices are mixed and appear on both channel 0 and channel 1 of the splitter. My goal is to get the speaking voice on channel 0 and the receiving voice on channel 1.
The function below is called back when microphone permission is granted via navigator.getUserMedia({audio: true}).
Does anyone have advice?
Microphone.prototype.onMediaStream = function(stream) {
  var audioTrack = stream.getAudioTracks();
  var audioStreamTrackName = audioTrack[0].label;
  var audioStreamTrackType = audioTrack[0].kind;
  console.log('onMediaStream: getAudioTracks:[0] ' + audioStreamTrackName + ' ' + audioStreamTrackType);
  var AudioCtx = window.AudioContext || window.webkitAudioContext;
  var maxdb = -10.0, mindb = -80.0, fft = 2048, smoothing = 0.8, maxVol = 1.0, minVol = 0.0;
  if (!AudioCtx)
    throw new Error('AudioContext not available');
  if (!this.audioContext)
    this.audioContext = new AudioCtx();
  // create two analysers
  this.analyser0 = this.audioContext.createAnalyser();
  this.analyser0.fftSize = fft;
  this.analyser0.minDecibels = mindb;
  this.analyser0.maxDecibels = maxdb;
  this.analyser0.smoothingTimeConstant = smoothing;
  this.analyser1 = this.audioContext.createAnalyser();
  this.analyser1.fftSize = fft;
  this.analyser1.minDecibels = mindb;
  this.analyser1.maxDecibels = maxdb;
  this.analyser1.smoothingTimeConstant = smoothing;
  this.splitter = this.audioContext.createChannelSplitter(2); // 2-channel splitter: stereo to two mono streams
  this.merger = this.audioContext.createChannelMerger(2);     // 2-channel merger: two mono streams to stereo
  this.gainCh0 = this.audioContext.createGain();
  this.gainCh1 = this.audioContext.createGain();
  this.gainCh0.gain.value = maxVol; // max
  this.gainCh1.gain.value = maxVol; // max
  if (!this.mic) {
    this.mic = this.audioContext.createScriptProcessor(8192, 2, 2);
  }
  this.mic.onaudioprocess = this._onaudioprocess.bind(this);
  this.stream = stream;
  this.audioInput = this.audioContext.createMediaStreamSource(stream);
  // audio-node connections
  this.audioInput.connect(this.splitter);
  this.splitter.connect(this.gainCh0, 0); // connect splitter output channel 0 to gainCh0
  this.splitter.connect(this.gainCh1, 1); // connect splitter output channel 1 to gainCh1
  this.gainCh0.connect(this.merger, 0, 0); // connect gainCh0 output to merger input channel 0
  this.gainCh1.connect(this.merger, 0, 1); // connect gainCh1 output to merger input channel 1
  this.merger.connect(this.mic); // connect merger to the ScriptProcessor
  this.mic.connect(this.audioContext.destination);
  this.gainCh0.connect(this.analyser0);
  this.gainCh1.connect(this.analyser1);
  // start recording
  this.onStartRecording();
};
(Screenshot of the Firefox Web Audio debug tool output.)