Web Audio API multiple scriptprocessor nodes - javascript

I've been searching for a solution to this problem for nearly two days now.
I have a Web Audio API app that captures the microphone input. In one script processor I'm windowing the signal with a Hann window, which works fine when the audio chain looks like this:
source -> windowScriptProcessorNode -> audioContext.destination
Then I wanted to add another script processor to the chain, like this:
source -> windowScriptProcessorNode -> otherScriptProcessorNode -> audioContext.destination
but at the inputBuffer of the otherScriptProcessorNode there are just zeros instead of the signal from the windowScriptProcessorNode.
Here is some code:
var audioContext = new AudioContext();

// get microphone input via getUserMedia
navigator.getUserMedia({ audio: true }, function(stream) {
  // set up the microphone source
  var audioSource = audioContext.createMediaStreamSource(stream);

  // set up the Hann window script processor node
  var windowScriptProcessorNode = audioContext.createScriptProcessor(BLOCKLENGTH, 1, 1);
  windowScriptProcessorNode.onaudioprocess = function(e) {
    var windowNodeInput = e.inputBuffer.getChannelData(0);
    var windowNodeOutput = e.outputBuffer.getChannelData(0);
    if (windowfunction === true) {
      windowNodeOutput.set(calc.applyDspWindowFunction(windowNodeInput));
    } else {
      windowNodeOutput.set(windowNodeInput);
    }
  };

  // some other script processor node, just passing the signal through
  var otherScriptProcessorNode = audioContext.createScriptProcessor(BLOCKLENGTH, 1, 1);
  otherScriptProcessorNode.onaudioprocess = function(e) {
    var otherNodeInput = e.inputBuffer.getChannelData(0);
    var otherNodeOutput = e.outputBuffer.getChannelData(0);
    otherNodeOutput.set(otherNodeInput);
  };

  // this connection works fine!
  audioSource.connect(windowScriptProcessorNode);
  windowScriptProcessorNode.connect(audioContext.destination);

  /* // this connection does NOT work
  audioSource.connect(windowScriptProcessorNode);
  windowScriptProcessorNode.connect(otherScriptProcessorNode);
  otherScriptProcessorNode.connect(audioContext.destination);
  */
}, function(err) {
  console.error(err);
});

Related

Stream audio over websocket with low latency and no interruption

I'm working on a project which requires the ability to stream audio from a webpage to other clients. I'm already using WebSockets and would like to channel the data there.
My current approach uses MediaRecorder, but there is a problem with sampling which causes interruptions. It records 1 s of audio and then sends it to the server, which relays it to the other clients. Is there a way to capture a continuous audio stream and transform it to base64?
Maybe if there were a way to create base64 audio from a MediaStream without delay, it would solve the problem. What do you think?
I would like to keep using WebSockets; I know there is WebRTC.
Have you ever done something like this? Is it doable?
                                                              --> Device 1
MediaStream -> MediaRecorder -> base64 -> WebSocket -> Server --> Device ..
                                                              --> Device 18
Here is a demo of the current approach; you can try it here: https://jsfiddle.net/8qhvrcbz/
var sendAudio = function(b64) {
  // in this demo the base64 audio is simply played back locally via eval
  var message = 'var audio = document.createElement(\'audio\');';
  message += 'audio.src = "' + b64 + '";';
  message += 'audio.play().catch(console.error);';
  eval(message);
  console.log(b64);
};

navigator.mediaDevices.getUserMedia({
  audio: true
}).then(function(stream) {
  setInterval(function() {
    var chunks = [];
    var recorder = new MediaRecorder(stream);
    recorder.ondataavailable = function(e) {
      chunks.push(e.data);
    };
    recorder.onstop = function(e) {
      var audioBlob = new Blob(chunks);
      var reader = new FileReader();
      reader.readAsDataURL(audioBlob);
      reader.onloadend = function() {
        var b64 = reader.result;
        b64 = b64.replace('application/octet-stream', 'audio/mpeg');
        sendAudio(b64);
      };
    };
    recorder.start();
    setTimeout(function() {
      recorder.stop();
    }, 1050);
  }, 1000);
});
WebSocket turned out not to be the best fit here. I solved it by using WebRTC instead of WebSocket.
The WebSocket-based workaround was to record 1050 ms instead of 1000 ms; that causes a bit of overlap, but it is still better than hearing gaps.
Although you have solved this through WebRTC, which is the industry-recommended approach, I'd like to share my answer to this.
The problem here is not WebSockets in general but rather the MediaRecorder API. Instead of using it, one can capture raw PCM audio and then hand the captured array buffers to a web worker or WASM module for encoding into MP3 chunks or similar.
const context = new AudioContext();
let leftChannel = [];
let rightChannel = [];
let recordingLength = null;
let bufferSize = 512;
let sampleRate = context.sampleRate;

// audioStream is the MediaStream obtained from getUserMedia({ audio: true })
const audioSource = context.createMediaStreamSource(audioStream);
const scriptNode = context.createScriptProcessor(bufferSize, 1, 1);
audioSource.connect(scriptNode);
scriptNode.connect(context.destination);
scriptNode.onaudioprocess = function(e) {
  // Do something with the data, e.g. convert it to WAV or MP3
};
Based on my experiments this gives you "real-time" audio. My theory about the MediaRecorder API is that it does some internal buffering before emitting anything, which causes the observable delay.
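For illustration, here is a minimal sketch of what that onaudioprocess handler could do, assuming an already-open WebSocket named socket (the socket itself is not part of the original answer): convert each Float32 block to 16-bit PCM and send it as a binary message.
// Minimal sketch, not the answer author's exact code.
// Assumes `socket` is an open WebSocket with socket.binaryType = 'arraybuffer'.
scriptNode.onaudioprocess = function(e) {
  var input = e.inputBuffer.getChannelData(0);   // Float32 samples in [-1, 1]
  var pcm16 = new Int16Array(input.length);
  for (var i = 0; i < input.length; i++) {
    var s = Math.max(-1, Math.min(1, input[i])); // clamp
    pcm16[i] = s < 0 ? s * 0x8000 : s * 0x7FFF;  // scale to the signed 16-bit range
  }
  socket.send(pcm16.buffer); // raw PCM chunk; encode to MP3 in a worker if bandwidth matters
};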

What are the ways to verify the microphone input level in Electron?

I am trying to develop a desktop app with Electron.
I have to check the microphone input level and run a function when the input rises above a certain level.
I have found some GitHub repositories, but most of them require other audio software like ALSA (Linux).
So right now the Web Audio API seems like the right way to go, but I don't see any related documentation or examples about it.
If it is possible, can anyone show me an example with the Web Audio API?
Ideas alone would be helpful too.
If there's a way other than the Web Audio API, that would be great as well.
I figured it out by using the Web Audio API AnalyserNode!
var constraints = { audio: true };
var analyser = null;
var dataArray = null;

navigator.mediaDevices.getUserMedia(constraints).then(function(mediaStream) {
  callbStream(mediaStream);
}).catch(function(err) {
  console.log(err.name + ": " + err.message);
});

function callbStream(mediaStream) {
  // set up the audio graph once, not on every poll
  var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
  var source = audioCtx.createMediaStreamSource(mediaStream);
  analyser = audioCtx.createAnalyser();
  analyser.fftSize = 32;
  dataArray = new Uint8Array(analyser.fftSize);
  source.connect(analyser);
}

function getStreamData() {
  if (analyser != null) {
    analyser.getByteTimeDomainData(dataArray);
    console.log('V:' + dataArray[0] / 128.0);
  }
}

var animClock = setInterval(getStreamData, 1000);
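If you need an actual level rather than a single sample, a rough sketch like the following could average the time-domain data into an RMS value and fire a callback above a threshold; the 0.1 threshold and the onLoud callback are placeholder assumptions, not from the original answer.
// Rough sketch: compute an RMS level from the analyser and fire a callback
// when it exceeds a threshold. `analyser` and `dataArray` come from the code above;
// the 0.1 threshold and onLoud() are placeholder assumptions.
function checkLevel(onLoud) {
  if (analyser == null) return;
  analyser.getByteTimeDomainData(dataArray);
  var sum = 0;
  for (var i = 0; i < dataArray.length; i++) {
    var v = (dataArray[i] - 128) / 128.0; // convert 0..255 to -1..1
    sum += v * v;
  }
  var rms = Math.sqrt(sum / dataArray.length);
  if (rms > 0.1) {
    onLoud(rms);
  }
}

setInterval(function() {
  checkLevel(function(level) { console.log('loud input, level = ' + level); });
}, 200);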

Mixing Audio elements into one stream destination for use with MediaRecorder

MediaRecorder only lets you record media of one type per track. So I'm using jQuery to get a list of all audio elements and connect them to the same audio context destination, in order to mix all the audio tracks into one audio stream that can later be recorded by MediaRecorder. What I have so far works, but it only captures the first track and none of the others.
Any idea why only one track comes through?
My code:
function gettracks(stream){
  var i = 0;
  var audioTrack;
  var audioctx = new AudioContext();
  var SourceNode = [];
  var dest = audioctx.createMediaStreamDestination();
  $('audio').each(function() {
    //the audio element id
    var afid = $(this).attr('id');
    var audio = $('#' + afid)[0];
    console.log('audio id ' + afid + ' Audio= ' + audio);
    SourceNode[i] = audioctx.createMediaElementSource(audio);
    //don't forget to connect the wires!
    SourceNode[i].connect(audioctx.destination);
    SourceNode[i].connect(dest);
    audioTrack = dest.stream.getAudioTracks()[0];
    stream.addTrack(audioTrack);
    i++;
  });
}
//from a mousedown event I call
stream = canvas.captureStream();
video.srcObject = stream;
gettracks(stream);
startRecording();
function startRecording() {
  recorder = new MediaRecorder(stream, {
    mimeType: 'video/webm'
  });
  recorder.start();
}
I would do it like this:
var ac = new AudioContext();
var mediaStreamDestination = new MediaStreamAudioDestinationNode(ac);
document.querySelectorAll("audio").forEach((e) => {
  var mediaElementSource = new MediaElementAudioSourceNode(ac, { mediaElement: e });
  mediaElementSource.connect(mediaStreamDestination);
});
console.log(mediaStreamDestination.stream.getAudioTracks()[0]); // give this to MediaRecorder
Breaking down what the above does:
var ac = new AudioContext();: create an AudioContext, to be able to route audio somewhere other than the default audio output.
var mediaStreamDestination = new MediaStreamAudioDestinationNode(ac);: from this AudioContext, get a special type of destination node that, instead of sending the output of the AudioContext to the audio output device, sends it to a MediaStream holding a single audio track.
document.querySelectorAll("audio").forEach((e) => {: get all the <audio> elements, and iterate over them.
var mediaElementSource = new MediaElementAudioSourceNode(ac, { mediaElement: e });: for each of those media elements, capture its output and route it into the AudioContext. This gives you an AudioNode.
mediaElementSource.connect(mediaStreamDestination);: connect the AudioNode that carries the media element's output to the destination that feeds the MediaStream.
mediaStreamDestination.stream.getAudioTracks()[0]: get the first audio MediaStreamTrack from this MediaStream. It only has one anyway.
Now, I suppose you can do something like stream.addTrack(mediaStreamDestination.stream.getAudioTracks()[0]), passing in the audio track above.
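For completeness, a rough sketch of how that could slot into the question's setup (reusing the canvas stream and MediaRecorder options from the question) might look like this:
// Rough sketch combining the answer's mixing code with the question's canvas stream;
// the canvas variable and the 'video/webm' mime type are taken from the question.
var stream = canvas.captureStream();
stream.addTrack(mediaStreamDestination.stream.getAudioTracks()[0]);

var recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
recorder.ondataavailable = function(e) {
  // collect e.data blobs here
};
recorder.start();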
What if you create a gain node and connect your source nodes to that:
var gain = audioctx.createGain();
gain.connect(dest);
and in the loop
SourceNode[i].connect(gain);
Then your sources all flow into a single gain node, which flows into your destination.
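Put together with the question's loop, that suggestion would look roughly like this (a sketch under the same assumptions as the question's code, not a tested drop-in):
// Sketch of the gain-node variant of the question's gettracks(); names follow the question.
function gettracks(stream) {
  var audioctx = new AudioContext();
  var dest = audioctx.createMediaStreamDestination();
  var gain = audioctx.createGain();
  gain.connect(audioctx.destination); // keep local playback audible
  gain.connect(dest);                 // and feed the recording stream

  $('audio').each(function() {
    var sourceNode = audioctx.createMediaElementSource(this);
    sourceNode.connect(gain);
  });

  stream.addTrack(dest.stream.getAudioTracks()[0]); // add the mixed track once
}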

Recording browser audio using navigator.mediaDevices.getUserMedia

I am recording browser audio input from the microphone and sending it via WebSocket to a Node.js service that writes the stream to a .wav file.
My problem is that the first recording comes out fine, but any subsequent recordings come out sounding very slow, about half speed, and are therefore unusable.
If I refresh the browser, the first recording works again and subsequent recordings are slowed down, which is why I am sure the problem is not in the Node.js service.
My project is an Angular 5 project.
I have pasted the code I am trying below.
I am using binary.js -> https://cdn.jsdelivr.net/binaryjs/0.2.1/binary.min.js
this.client = BinaryClient(`ws://localhost:9001`);

createStream() {
  window.Stream = this.client.createStream();
  window.navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
    this.success(stream);
  });
}

stopRecording() {
  this.recording = false;
  this.win.Stream.end();
}

success(e) {
  var audioContext = window.AudioContext || window.webkitAudioContext;
  var context = new audioContext();
  // the sample rate is in context.sampleRate
  var audioInput = context.createMediaStreamSource(e);
  var bufferSize = 2048;
  var recorder = context.createScriptProcessor(bufferSize, 1, 1);

  recorder.onaudioprocess = (e) => {
    if (!this.recording) return;
    console.log('recording');
    var left = e.inputBuffer.getChannelData(0);
    this.win.Stream.write(this.convertoFloat32ToInt16(left));
  };

  audioInput.connect(recorder);
  recorder.connect(context.destination);
}

convertoFloat32ToInt16(buffer) {
  var l = buffer.length;
  var buf = new Int16Array(l);
  while (l--) {
    buf[l] = buffer[l] * 0x7FFF; // scale [-1, 1] floats to the signed 16-bit range
  }
  return buf.buffer;
}
I am stumped as to what could be going wrong, so if anyone has experience with this browser tech I would appreciate any help.
Thanks.
I've had this exact problem - your problem is that the sample rate you are writing your WAV file with is incorrect.
You need to pass the sample rate used by the browser and the microphone to the Node.js service which writes the binary WAV file.
Client side:
After a successful navigator.mediaDevices.getUserMedia (in your case, the success function), get the sampleRate from the AudioContext:
var _sampleRate = context.sampleRate;
Then pass it to the Node.js listener as a parameter. In my case I used:
binaryClient.createStream({ SampleRate: _sampleRate });
Server (Node.js) side:
Use the passed SampleRate to set the WAV file's sample rate. In my case this is the code:
fileWriter = new wav.FileWriter(wavPath, {
  channels: 1,
  sampleRate: meta.SampleRate,
  bitDepth: 16
});
This will prevent broken sounds, low-pitched sounds, and slowed-down or sped-up WAV files.
Hope this helps.
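For context, a rough sketch of the matching server side (assuming the binaryjs and wav npm packages, as the answer implies) could look like this; the port and file path are placeholders.
// Rough server-side sketch; event names follow binaryjs, the writer follows the 'wav' package.
const BinaryServer = require('binaryjs').BinaryServer;
const wav = require('wav');

const server = new BinaryServer({ port: 9001 });

server.on('connection', function(client) {
  client.on('stream', function(stream, meta) {
    // meta.SampleRate is the value sent from the browser
    const fileWriter = new wav.FileWriter('recording.wav', {
      channels: 1,
      sampleRate: meta.SampleRate,
      bitDepth: 16
    });
    stream.pipe(fileWriter);
    stream.on('end', function() {
      fileWriter.end();
    });
  });
});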

How can I publish audio over a WebSocket?

I am developing an application which publishes the audio stream from the microphone through WebSockets. I am not able to play the WebSocket response in an audio control. Can anyone tell me how to play an audio buffer in an audio control? Please help me out.
I use the following code to play sounds created with a software synth.
The samples need to be in the range [-1.0 .. 1.0]. You should initialize context in the page's init function.
var context = new (window.AudioContext || window.webkitAudioContext)();

function playSound(buffer, freq, vol) // buffer, sampleRate, 0-100
{
  var mBuffer = context.createBuffer(1, buffer.length, freq);
  var dataBuffer = mBuffer.getChannelData(0);
  var soundBuffer = buffer;
  var i, n = buffer.length;
  for (i = 0; i < n; i++)
    dataBuffer[i] = soundBuffer[i];

  var node = context.createBufferSource();
  node.buffer = mBuffer;

  // AudioBufferSourceNode no longer has a gain property; use a GainNode instead
  var gainNode = context.createGain();
  gainNode.gain.value = 0.5 * vol / 100.0;

  node.connect(gainNode);
  gainNode.connect(context.destination);
  node.start(0); // formerly noteOn(0)
}
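To tie this back to the question, a minimal sketch could feed incoming WebSocket messages into playSound; the URL, the 44100 Hz sample rate, and the raw 16-bit PCM format are assumptions, not taken from the question.
// Minimal sketch: play raw 16-bit PCM received over a WebSocket.
// The URL, sample rate, and PCM format are assumptions.
var socket = new WebSocket('ws://localhost:9001');
socket.binaryType = 'arraybuffer';

socket.onmessage = function(event) {
  var pcm16 = new Int16Array(event.data);
  var samples = new Float32Array(pcm16.length);
  for (var i = 0; i < pcm16.length; i++) {
    samples[i] = pcm16[i] / 0x8000; // scale to [-1.0, 1.0]
  }
  playSound(samples, 44100, 100);
};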
