I have a computer with multiple audio-out ports on the sound card. Can I use the Web Audio API or something similar to play different audio sources through different output ports?
For example, in a browser, load 3 tracks and send each track to a different audio output port on the sound card.
Hmm. The ChannelSplitterNode seems to offer more fine-grained control over audio channels. I'm not sure that's what I would need, though: one multi-channel audio source split into multiple mono outputs?
https://developer.mozilla.org/en-US/docs/Web/API/ChannelSplitterNode
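For what it's worth, here is a minimal sketch of what ChannelSplitterNode actually does, assuming a stereo file (the URL, function name, and gain values are placeholders). Note that it only rearranges channels within the graph; everything still ends up at the context's single destination, so it doesn't by itself pick a hardware port.

```js
// Minimal sketch: split a stereo file into two mono signals with
// ChannelSplitterNode, process each independently, then merge back.
// "music.mp3" and the gain values are placeholders.
const ctx = new AudioContext();

async function playSplit(url) {
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;

  const splitter = ctx.createChannelSplitter(2); // one output per channel
  const merger = ctx.createChannelMerger(2);     // recombine into one stream

  const leftGain = ctx.createGain();
  const rightGain = ctx.createGain();
  leftGain.gain.value = 1.0;
  rightGain.gain.value = 0.5;

  source.connect(splitter);
  splitter.connect(leftGain, 0);   // splitter output 0 = channel 0 (left)
  splitter.connect(rightGain, 1);  // splitter output 1 = channel 1 (right)
  leftGain.connect(merger, 0, 0);  // back into merger input 0
  rightGain.connect(merger, 0, 1); // back into merger input 1
  merger.connect(ctx.destination); // still the single hardware destination

  source.start();
}

playSplit('music.mp3');
```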
Related
I'm using twilio-video.js to create a WebRTC video connection between two clients. To the basic camera and microphone tracks, I add an additional sound stream, created by loading a music file and getting a MediaStream out of it through the Web Audio API. It's then turned into Twilio's LocalAudioTrack and streamed to the other peer.
The problem is, even though everything works correctly on both sides, the receiving peer experiences slight (but unacceptable for my use case) desynchronization between the microphone & camera tracks and the additional sound. It's sometimes ahead of and sometimes behind the microphone & camera input compared to what the streaming peer is experiencing.
I sadly cannot share any code, but I hope I described my issue clearly.
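For context, this is roughly the pipeline being described (not the poster's code): decode a file, route it into a MediaStreamAudioDestinationNode, and wrap the resulting track for Twilio. It assumes twilio-video.js is available as `Video`; the URL and function name are placeholders.

```js
// Rough sketch of the pipeline described above (not the poster's code).
// Assumes twilio-video.js is loaded and exposed as `Video`.
const ctx = new AudioContext();

async function createMusicTrack(url) {
  const response = await fetch(url);
  const buffer = await ctx.decodeAudioData(await response.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;

  // A destination node that exposes its output as a MediaStream.
  const streamDestination = ctx.createMediaStreamDestination();
  source.connect(streamDestination);
  source.start();

  const [audioTrack] = streamDestination.stream.getAudioTracks();
  return new Video.LocalAudioTrack(audioTrack);
}

// The returned track would then be published to the room alongside
// the camera and microphone tracks.
```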
For instance, if I have a stereo audio interface/sound card, could I get the left and right channels separately into JavaScript for realtime-ish audio processing of stereo audio?
I think it's actually not possible to get stereo audio input into Chrome, according to the following Chromium issue: https://bugs.chromium.org/p/chromium/issues/detail?id=453876
Unfortunately we don't support [...] multiple devices at the moment.
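If and when the browser does deliver a two-channel capture (which, per the issue above, it may not), splitting the left and right channels for separate processing would look roughly like this. This is a sketch only; the constraints and filter nodes are stand-ins.

```js
// Sketch only: split a (hopefully) stereo capture into per-channel chains.
const ctx = new AudioContext();

async function processStereoInput() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      channelCount: 2,          // request stereo; the browser may ignore this
      echoCancellation: false,  // voice processing tends to fold input to mono
      noiseSuppression: false,
      autoGainControl: false,
    },
  });

  const input = ctx.createMediaStreamSource(stream);
  const splitter = ctx.createChannelSplitter(2);
  input.connect(splitter);

  // Per-channel processing chains; the filters here are just stand-ins.
  const leftFilter = ctx.createBiquadFilter();
  const rightFilter = ctx.createBiquadFilter();
  splitter.connect(leftFilter, 0);   // left channel
  splitter.connect(rightFilter, 1);  // right channel

  const merger = ctx.createChannelMerger(2);
  leftFilter.connect(merger, 0, 0);
  rightFilter.connect(merger, 0, 1);
  merger.connect(ctx.destination);
}
```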
I need to capture multiple users' webcams in the browser and then broadcast them to multiple users. This must include audio and video streams with low- and high-quality options. Ideally, I would not like to use Flash, but having Flash as a fallback is totally fine.
When the webcam is broadcast to the users, this must work on as many platforms as possible.
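The capture side of this is straightforward with getUserMedia; the broadcast/relay side needs a media server and is out of scope here. A sketch of the capture part, with placeholder resolutions for the quality options:

```js
// Sketch of the capture side only: grab camera + microphone with either
// low- or high-quality constraints. The exact resolutions are placeholders.
async function captureWebcam(highQuality) {
  const constraints = {
    audio: true,
    video: highQuality
      ? { width: { ideal: 1280 }, height: { ideal: 720 }, frameRate: { ideal: 30 } }
      : { width: { ideal: 320 }, height: { ideal: 240 }, frameRate: { ideal: 15 } },
  };
  return navigator.mediaDevices.getUserMedia(constraints);
}

// Each resulting MediaStream would then be handed to whatever transport
// (WebRTC peer connection, media server, etc.) does the broadcasting.
```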
I would like to create a mixer effect with the Web Audio API.
Play one sound from the speakers and a different sound from the headphones.
The only thing you can connect to is the destination property of the AudioContext, and from there it's up to the audio hardware where it will output. So, in other words, unfortunately no, you can't control which output the sound will play on from within Web Audio.
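To illustrate the point: both "mixer" channels in a graph like the one below can only end up at ctx.destination, so there is no per-source choice of speaker vs. headphone from within Web Audio. The element selectors are placeholders.

```js
// Illustration of the answer above: every source ultimately funnels into the
// single AudioContext destination chosen by the OS/browser.
const ctx = new AudioContext();

function playThrough(gainValue, mediaElement) {
  const source = ctx.createMediaElementSource(mediaElement);
  const gain = ctx.createGain();
  gain.gain.value = gainValue;
  source.connect(gain);
  gain.connect(ctx.destination); // the only output Web Audio exposes
  return gain;
}

const trackA = playThrough(0.8, document.querySelector('#track-a'));
const trackB = playThrough(0.5, document.querySelector('#track-b'));
// Both faders mix into the same hardware output; neither can be sent
// separately to the speakers or the headphones from here.
```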
Is it possible to access system audio using the Web Audio API, in order to visualize it or apply an equalizer to it? It looks like it's possible to hook up system audio to an input device that the Web Audio API can access (i.e. Web Audio API, get the output from the soundcard); however, ideally I would like to be able to process all sound output without making any local configuration changes.
No, this isn't possible. The closest you could get is installing a loopback audio device on the user's system, like Soundflower on OS X, and using that as the default audio output device. It's not possible without local config changes.
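Once such a loopback device is installed and set as the system output, it shows up as an input device that getUserMedia can capture. A sketch of that approach, feeding an AnalyserNode for visualization; the device-name match is just a guess, and labels may be empty until microphone permission has been granted.

```js
// Sketch of the loopback approach described above: capture the loopback
// input device and visualize it with an AnalyserNode.
const ctx = new AudioContext();

async function visualizeSystemAudio() {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const loopback = devices.find(
    (d) => d.kind === 'audioinput' && /soundflower/i.test(d.label)
  );
  if (!loopback) throw new Error('No loopback input device found');

  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: loopback.deviceId } },
  });

  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);

  const data = new Uint8Array(analyser.frequencyBinCount);
  function draw() {
    analyser.getByteFrequencyData(data); // feed this into a canvas, etc.
    requestAnimationFrame(draw);
  }
  draw();
}
```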