Combine setSinkId with stereoPanner? - javascript

I'm writing an Electron app to deliver separate audio streams to 10 audio channels, using a Focusrite Scarlett 18i20 USB sound card. Windows 10 splits the outputs into the following stereo outputs:
Focusrite 1 + 2
Focusrite 3 + 4
Focusrite 5 + 6
Focusrite 7 + 8
Focusrite 9 + 10
Because of this, I need the app to send the audio to a specific output, as well as splitting the stereo channels. Example: to deliver audio to the 3rd output, I need to send it to "Focusrite 3 + 4" on the left channel only. Unfortunately, I can't seem to do both at the same time.
I start with an audio object:
let audio = new Audio("https://test.com/test.mp3");
I do the following to get the sinkIds for the outputs:
let devices = await navigator.mediaDevices.enumerateDevices();
devices = devices.filter(device => device.kind === 'audiooutput');
The following works for sending the audio to a specific sinkId:
audio.setSinkId(sinkId).then(() => {
  audio.play();
});
The following also works, to play only the left channel:
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
panner.pan.value = -1;
source.connect(panner);
panner.connect(audioContext.destination);
So far everything is fine. But when I try to combine these, the sinkId is ignored, and the audio is being sent to the default audio output. I have tried several approaches, including this one:
audio.setSinkId(sinkId).then(() => {
  let audioContext = new AudioContext();
  let source = audioContext.createMediaElementSource(audio);
  let panner = audioContext.createStereoPanner();
  panner.pan.value = -1;
  source.connect(panner);
  panner.connect(audioContext.destination);
});
I have also tried an approach using audioContext.createChannelMerger instead of the stereoPanner. This works perfectly on its own, but not combined with setSinkId. I get the same behavior on Windows 10 and Mac.
Any ideas?

To route the audio output of an AudioContext to a specific output device you would need to use a MediaStreamAudioDestinationNode in combination with another audio element. The sinkId of your existing audio element doesn't have any effect anymore once it gets routed into the AudioContext.
The desired signal flow would look something like this:
audio
↓
MediaElementAudioSourceNode
↓
StereoPannerNode
↓
MediaStreamAudioDestinationNode
↓
outputAudio
Your AudioContext code would then need to be changed to route everything to a MediaStreamAudioDestinationNode instead of the default destination.
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1;
source.connect(panner).connect(destination);
The last part is to route the stream of the MediaStreamAudioDestinationNode to a newly created audio element which can then be used to set the sinkId.
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(sinkId);
outputAudio.play();
Please note that using an audio element just for setting the sinkId is a bit of a hack. There are plans to make setting the sinkId a feature of the AudioContext.
https://github.com/WebAudio/web-audio-api-v2/issues/10
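Putting these pieces together, the whole flow might look roughly like this. This is just a sketch, assuming sinkId was obtained from enumerateDevices as in the question and that the code runs in response to a user gesture so the AudioContext is allowed to start:
let audio = new Audio("https://test.com/test.mp3");
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1; // left channel only
source.connect(panner).connect(destination);
// The sink is set on the element that plays the processed stream,
// not on the original audio element.
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(sinkId).then(() => {
  audio.play();
  outputAudio.play();
});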

Related

javascript new audioContext mono to Stereo not working

I need to convert the mp3 files I play in a project from mono to stereo with the Web Audio API, but I can't manage to do that with a1 = new Audio('/1.mp3');. My entire system is based on this structure. Is there a way to convert all the sounds playing on the page to stereo, or to convert a sound created with new Audio('/1.mp3');?
var a1 = new Audio(`/1.mp3`);
a1.volume = .5;
a1.play()
I am using a simple code structure as above.
https://stackoverflow.com/a/54990085/15929287
I couldn't adapt the above answer for myself; I can't manage to convert the sound I created with new Audio() to stereo. The example in the link also uses an oscillator, whereas I'm just trying to do something where I can adjust the mono/stereo setting. I need your help.
The linked answer assumes the mono audio source is an oscillator created with the Web Audio API, but it is also possible to use an audio element as the source via a MediaElementAudioSourceNode.
The audio graph would then need to be wired up like this:
const audio = new Audio('/1.mp3');
const audioCtx = new AudioContext();
const source = audioCtx.createMediaElementSource(audio);
const gainNodeL = audioCtx.createGain();
const gainNodeR = audioCtx.createGain();
const merger = audioCtx.createChannelMerger(2);
source.connect(gainNodeL);
source.connect(gainNodeR);
gainNodeL.connect(merger, 0, 0);
gainNodeR.connect(merger, 0, 1);
merger.connect(audioCtx.destination);
Please note that it's probably still necessary to call resume() on the AudioContext in response to a user gesture to make sure the AudioContext is running.
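For example, a minimal sketch (the button selector and click handler are assumptions, not from the question):
// Resume the AudioContext from a user gesture, then start playback.
document.querySelector('#play-button').addEventListener('click', async () => {
  await audioCtx.resume();
  await audio.play();
});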

Possibility to record playback of browser audio element using JavaScript?

I am currently working on a simple audio project which uses HTML / JS audio elements to play audio using buttons.
Is it somehow possible to record that audio being played instead of recording the microphone (by using the MediaRecorder API)?
After following the steps provided by lawrence-witt, I figured it out. Here is an example for everyone trying to achieve the same:
const audioContext = new AudioContext();
const destination = audioContext.createMediaStreamDestination();
const mediaRecorder = new MediaRecorder(destination.stream);
// array of 'new Audio()' elements
audioArray.forEach(audio => {
  // captureStream() exposes each element's output as a MediaStream,
  // which is fed into the shared recording destination
  let source = audioContext.createMediaStreamSource(audio.captureStream());
  source.connect(destination);
});
mediaRecorder.start();
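To actually get at the recording, the data still has to be collected and the recorder stopped at some point. A sketch using the standard MediaRecorder events (the 'audio/webm' MIME type is an assumption; browsers may pick a different default):
const chunks = [];
mediaRecorder.ondataavailable = (event) => chunks.push(event.data);
mediaRecorder.onstop = () => {
  // Combine the recorded chunks into a single playable blob.
  const blob = new Blob(chunks, { type: 'audio/webm' });
  const url = URL.createObjectURL(blob);
  console.log('Recording available at', url);
};
// ...later, when playback has finished:
mediaRecorder.stop();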

Adding panner / spacial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element and where I presume i need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext();
audioCtx.listener.setOrientation(0,0,-1,0,1,0)
var panner = audioCtx.createPanner();
panner.panningModel = 'HRTF';
panner.distanceModel = 'inverse';
panner.refDistance = 1;
panner.maxDistance = 10000;
panner.rolloffFactor = 1;
panner.coneInnerAngle = 360;
panner.coneOuterAngle = 0;
panner.coneOuterGain = 0;
panner.setPosition(10000, 0, 0); //this doesn't do anything
peerInput.connect(panner);
panner.connect(audioCtx.destination);
// attach the stream to the document element
var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
remote_media.attr("autoplay", "autoplay");
if (MUTE_AUDIO_BY_DEFAULT) {
remote_media.attr("muted", "false");
}
remote_media.attr("controls", "");
peer_media_elements[peer_id] = remote_media;
$('body').append(remote_media);
attachMediaStream(remote_media[0], event.stream);
}
Try to get the event stream as a source node before setting up the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
Reference: Mozilla Developer Network - AudioContext
createPanner Reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
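In the context of the code above, that would mean replacing the undefined peerInput with a source node created from the incoming stream, roughly like this (a sketch, not tested against the rest of the app):
var peerInput = audioCtx.createMediaStreamSource(event.stream);
peerInput.connect(panner);
panner.connect(audioCtx.destination);
Note that if the raw stream also stays attached to the video element, the unprocessed audio may still be audible alongside the panned output, so muting that element is probably necessary.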
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
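A minimal oscillator test along those lines might look like this (positions and timings are only for illustration):
const ctx = new AudioContext();
const osc = ctx.createOscillator();
const panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.setPosition(10, 0, 0); // start on the right
osc.connect(panner);
panner.connect(ctx.destination);
osc.start();
// Move the source to the left after two seconds to hear the effect.
setTimeout(() => panner.setPosition(-10, 0, 0), 2000);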
Figured this out for myself.
The problems was not the code, it was because I was connected with Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and audio output downgrades to mono.
If you have mono audio, you definitely cannot do 3D positioned sound, hence me thinking the code was not working.

WebRTC add audio from AudioContext to captureStream

I have 3 AudioContexts in my document and a canvas that is rendering random things. I'm trying to capture the audio that is playing in the AudioContext and adding it to the canvasStream so the audio + video is being send over webrtc.
Now the code looks like this:
Here I create the media streams:
this.mediaStreams = [];
window.activeAudioContexts.forEach(context=>{
const gainNode = context.createGain();
gainNode.gain.value = 1;
const destination = context.createMediaStreamDestination();
gainNode.connect(destination);
this.mediaStreams.push(destination.stream);
});
Here I create the canvasStream and add the audio tracks:
const stream = this.targetCanvas.captureStream(30);
this.mediaStreams.forEach((audioStream) => {
  stream.addTrack(audioStream.getAudioTracks()[0]);
});
broadCaster.attachStream(stream);
On the receiving client I can see the video and I can see that the MediaStream has 3 audio tracks; however, no audio is being played in the video.
Any ideas where this is going wrong?
Thanks for pointing me in the right direction!
Cheers,
Erik

Clicking sounds in Stream played with Web Audio Api

I have a strange Problem. I'm using Web Audio to play a stream from the server. I do that the following way:
var d2 = new DataView(evt.data);
var data = new Float32Array(d2.byteLength / Float32Array.BYTES_PER_ELEMENT);
for (var jj = 0; jj < data.length; ++jj) {
data[jj] = d2.getFloat32(jj * Float32Array.BYTES_PER_ELEMENT, true);
}
var buffer = context.createBuffer(1, data.length, 44100);
buffer.getChannelData(0).set(data);
source = context.createBufferSource();
source.buffer = buffer;
source.start(startTime);
source.connect(context.destination);
startTime += buffer.duration;
This works fine.
If I play the stream on my computer, I don't have any problems.
If I play the same stream on my Windows 8 tablet (same Chrome version), I get a lot of clicking sounds in the audio. There are multiple of them within one second.
It kind of seems that I hear a click at the end of each buffer.
I don't understand the difference... The only difference I could find was that the sampling rate of the sound card on my computer is 44100 and on the tablet it's 48000.
The transmitted stream is at 44100 and I don't have any sample-rate problems, just the clicking sounds.
Does anybody have an idea why this is happening?
Thank you,
metabolic
An AudioBufferSourceNode resamples its buffer to the AudioContext sample rate. As you can imagine, the API does not allow you to keep the resampler state from one AudioBufferSourceNode to the next, so there is a discontinuity between the two buffers.
I think the easiest way is to provide a stream at the sample rate of the device, by resampling server-side. Once AudioWorkerNode is ready and implemented you'll be able to fix this yourself client-side as well, but it isn't yet.
Alternatively, you can just stream using an audio element and pipe that into the Web Audio API using AudioContext.createMediaElementSource().
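That alternative would look roughly like this (a sketch; the stream URL is hypothetical):
var audioEl = new Audio('https://example.com/stream.mp3'); // hypothetical stream URL
var context = new AudioContext();
var source = context.createMediaElementSource(audioEl);
source.connect(context.destination);
audioEl.play();
// The element handles buffering and resampling internally, so there are no
// per-buffer discontinuities to produce clicks.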
I had the same issue; thanks to Padenot's answer, I checked the sample rates. AudioContext.sampleRate defaulted to 44100, but the PCM data and AudioBuffers were 48000. Initialising the AudioContext with a matching sampleRate solved the problem:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioCtx = new AudioContext({
latencyHint: 'interactive',
sampleRate: 48000,
});
With this, I can schedule the playback of 20 ms 48 kHz PCM16 AudioBuffers back-to-back without any clicks or distortion.
