I have created a Web Audio API application that uses biquad filters (lowpass, highpass, etc.) in JavaScript. The application seems to work: it displays on the canvas without errors, so I'm guessing it does. Anyway, I'm not a pro at JavaScript, far from it. I showed someone a small snippet of my code and they said it was very messy and that I'm not building my audio graph properly, for example not connecting all of the nodes from start to finish.
Now, I know that the source connects to the gain, the gain connects to the filter, and the filter connects to the destination. I tried to look at it, but I can't figure out what's wrong or how to fix it.
// Play the sound.
function playSound(buffer) {
    aSoundSource = audioContext.createBufferSource(); // creates a sound source.
    aSoundSource.buffer = buffer;                     // tell the source which sound to play.
    aSoundSource.connect(analyser);                   // connect the source to the analyser.
    analyser.connect(audioContext.destination);       // connect the analyser to the context's destination (the speakers).
    aSoundSource.start(0);                            // play the source now.

    // Create filter
    var filter = audioContext.createBiquadFilter();
    // Create the audio graph
    aSoundSource.connect(filter);

    // Set the gain node
    gainNode = audioContext.createGain();
    aSoundSource.connect(gainNode); // connect the source to the gain node
    gainNode.connect(audioContext.destination);

    // Set the current volume
    var volume = document.getElementById('volume').value;
    gainNode.gain.value = volume;

    // Create and specify parameters for the low-pass filter
    filter.type = "lowpass";
    filter.frequency.value = 440;

    // Connect filter to destination (speakers)
    filter.connect(audioContext.destination);

    // Set the playing flag
    playing = true;

    // Clear the spectrogram canvas
    var canvas = document.getElementById("canvas2");
    var context = canvas.getContext("2d");
    context.fillStyle = "rgb(255,255,255)";
    context.fillRect(0, 0, spectrogramCanvasWidth, spectrogramCanvasHeight);

    // Start visualizer.
    requestAnimFrame(drawVisualisation);
}
Because of this, my volume bar thingy has stopped working. I also tried a highpass filter, but it displays the same thing. I'm confused and have no one else to ask; by the way, the person I asked didn't actually help, they just said it was messy.
I appreciate all of the help, guys. Thank you!
So, there's a lot of context missing because of how you posted this: for example, you didn't include your drawVisualisation() code, and you don't explain exactly what you mean by your "volume bar thingy has stopped working".
My guess is that you have a graph that connects your source node to the output (audioContext.destination) three times in parallel: through the analyser (which is a pass-through, and is connected to the output), through the filter, AND through the gain node. Your analyser in this case shows only the unfiltered signal (you won't see any effect from the filter, because that's a parallel signal path), and the actual output is the sum of three chains from the source node (one through the filter, one through the analyser, one through the gain node). That might cause some odd phasing effects, but it will also roughly triple the volume and quite possibly clip.
Your graph looks like this:
source → destination
↳ filter → destination
↳ gain → destination
What you probably want is to connect each of these nodes in series, like this:
source → filter → gain → destination
I think you want something like this:
// Play the sound.
function playSound(buffer) {
    aSoundSource = audioContext.createBufferSource(); // creates a sound source.
    aSoundSource.buffer = buffer;                     // tell the source which sound to play.

    // Create the filter
    var filter = audioContext.createBiquadFilter();
    // Create and specify parameters for the low-pass filter
    filter.type = "lowpass";
    filter.frequency.value = 440;

    // Create the gain node
    gainNode = audioContext.createGain();
    // Set the current volume
    var volume = document.getElementById('volume').value;
    gainNode.gain.value = volume;

    // Set up the audio graph: source → filter → gain → analyser → destination
    aSoundSource.connect(filter);
    filter.connect(gainNode);
    gainNode.connect(analyser);
    analyser.connect(audioContext.destination);

    aSoundSource.start(0); // play the source now.

    // Set the playing flag
    playing = true;

    // Clear the spectrogram canvas
    var canvas = document.getElementById("canvas2");
    var context = canvas.getContext("2d");
    context.fillStyle = "rgb(255,255,255)";
    context.fillRect(0, 0, spectrogramCanvasWidth, spectrogramCanvasHeight);

    // Start visualizer.
    requestAnimFrame(drawVisualisation);
}
I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function, where the audio and video get piped into a video element and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();
    audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);

    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.distanceModel = 'inverse';
    panner.refDistance = 1;
    panner.maxDistance = 10000;
    panner.rolloffFactor = 1;
    panner.coneInnerAngle = 360;
    panner.coneOuterAngle = 0;
    panner.coneOuterGain = 0;
    panner.setPosition(10000, 0, 0); // this doesn't do anything
    peerInput.connect(panner);
    panner.connect(audioCtx.destination);

    // Attach the stream to the document element
    var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
    remote_media.attr("autoplay", "autoplay");
    if (MUTE_AUDIO_BY_DEFAULT) {
        remote_media.attr("muted", "false");
    }
    remote_media.attr("controls", "");
    peer_media_elements[peer_id] = remote_media;
    $('body').append(remote_media);
    attachMediaStream(remote_media[0], event.stream);
}
Try getting the event stream into the audio graph before connecting the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
Reference: Mozilla Developer Network - AudioContext
createPanner reference: Mozilla Developer Network - createPanner
3rd-party library: wavesurfer.js
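A minimal sketch of that idea (the helper name is illustrative; the key point is that the remote stream has to enter the graph via createMediaStreamSource, because attaching it only to an audio/video element bypasses the panner entirely):

```javascript
// Hypothetical helper: route a remote MediaStream through a panner.
// The stream must be turned into a graph node with createMediaStreamSource;
// a plain <audio>/<video> element plays the stream directly, so the
// panner never sees the signal.
function connectStreamToPanner(audioCtx, stream) {
    var source = audioCtx.createMediaStreamSource(stream); // stream -> graph node
    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.setPosition(10000, 0, 0);
    source.connect(panner);
    panner.connect(audioCtx.destination);
    return source; // keep a reference so the source isn't garbage-collected
}
```

Note that with this approach the audible output comes from the audio context, so the media element should be muted to avoid hearing the unpanned stream twice.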
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
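A smaller test of the kind suggested above might look like this (a browser-only sketch; run it from a console and vary x to listen for movement):

```javascript
// Minimal panner smoke test: a plain oscillator through a default panner.
// All cone/distance options are left at their defaults; only the
// position is varied.
function pannerSmokeTest(x) {
    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var osc = ctx.createOscillator();
    var panner = ctx.createPanner();
    osc.frequency.value = 440;
    osc.connect(panner);
    panner.connect(ctx.destination);
    panner.setPosition(x, 0, 0); // sweep x between, say, -10 and 10
    osc.start();
    osc.stop(ctx.currentTime + 1); // one second of tone
    return ctx;
}
```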
Figured this out for myself.
The problem was not the code; it was that I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and audio output downgrades to mono.
If you have mono audio, you definitely cannot do 3D positioned sound, hence me thinking the code was not working.
I'm currently trying to create an audio visualisation using the Web Audio API; specifically, I'm attempting to produce Lissajous figures from a given audio source.
I came across this post, but I'm missing some preconditions. How can I get the time-domain data for the left and right channels separately? Currently it seems I'm only getting the merged data.
Any help or hint would be much appreciated.
$(document).ready(function () {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var audioElement = document.getElementById('audioElement');
    var audioSrc = audioCtx.createMediaElementSource(audioElement);
    var analyser = audioCtx.createAnalyser();

    // Bind analyser to media element source.
    audioSrc.connect(analyser);
    audioSrc.connect(audioCtx.destination);

    //var timeDomainData = new Uint8Array(analyser.frequencyBinCount);
    var timeDomainData = new Uint8Array(200);

    // Loop and update the time-domain data array.
    function renderChart() {
        requestAnimationFrame(renderChart);
        // Copy time-domain data to the timeDomainData array.
        analyser.getByteTimeDomainData(timeDomainData);
        // Debugging: print to console
        console.log(timeDomainData);
    }

    // Run the loop
    renderChart();
});
The observation is correct, the waveform is the down-mixed result. From the current spec (my emphasis):
Copies the current down-mixed time-domain (waveform) data into the
passed unsigned byte array. [...]
To get around this you could use a channel splitter (createChannelSplitter()) and feed each channel into its own analyser node.
For more details on createChannelSplitter() see this link.
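A sketch of that approach for the Lissajous case (assuming a media element source like the one in the question; the helper names are illustrative):

```javascript
// Split the stereo signal so each channel feeds its own analyser.
function setupStereoAnalysers(audioCtx, sourceNode) {
    var splitter = audioCtx.createChannelSplitter(2);
    var left = audioCtx.createAnalyser();
    var right = audioCtx.createAnalyser();
    sourceNode.connect(splitter);
    splitter.connect(left, 0);  // channel 0 -> left analyser
    splitter.connect(right, 1); // channel 1 -> right analyser
    sourceNode.connect(audioCtx.destination); // still audible
    return { left: left, right: right };
}

// Hypothetical pure helper: pair two byte waveforms (0..255, midpoint 128)
// into [-1, 1] XY points for drawing a Lissajous figure.
function toLissajousPoints(leftData, rightData) {
    var points = [];
    var n = Math.min(leftData.length, rightData.length);
    for (var i = 0; i < n; i++) {
        points.push([leftData[i] / 128 - 1, rightData[i] / 128 - 1]);
    }
    return points;
}
```

In the render loop you would then call getByteTimeDomainData on each analyser separately and plot the resulting XY pairs.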
I have a simple synth that plays a note for some length of time:
// Creating audio graph
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = audioCtx.createOscillator();
var gainNode = audioCtx.createGain();
oscillator.connect(gainNode);
gainNode.connect(audioCtx.destination);
// Setting parameters
oscillator.type = "sine";
oscillator.frequency.value = 2500;
// Run audio graph
var currentTime = offlineCtx.currentTime;
oscillator.start(currentTime);
oscillator.stop(currentTime + 1);
How can I get the PCM data of the sound the synthesiser makes? I've managed to do this with audio samples by using decodeAudioData, but I can't find an equivalent for an audio graph that isn't based on loading a sample.
I specifically want to render the audio graph with the OfflineAudioContext since I only care about retrieving the PCM data as fast as possible.
Thanks!
You say you want to use an offline context and then you don't actually use an offline context. So you should do
var offlineCtx = new OfflineAudioContext(nc, length, rate)
where nc = number of channels, length is the number of samples, and rate is the sample rate you want to use.
Create your graph, start everything and then do
offlineCtx.startRendering().then(function (buffer) {
    // buffer has the PCM data you want. Save it somewhere, or whatever.
});
(I'm not sure all browsers support promises from an offline context. If not, use offlineCtx.oncomplete to get the data. See the spec.)
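Put together, the question's synth rendered offline might look like this (a sketch; mono at 44100 Hz for one second is an assumption, adjust to taste):

```javascript
// Render one second of the 2500 Hz sine offline and return its PCM data.
function renderTone() {
    var rate = 44100;
    // 1 channel, one second's worth of samples, at 44100 Hz.
    var offlineCtx = new OfflineAudioContext(1, rate * 1, rate);

    var oscillator = offlineCtx.createOscillator();
    var gainNode = offlineCtx.createGain();
    oscillator.type = "sine";
    oscillator.frequency.value = 2500;
    oscillator.connect(gainNode);
    gainNode.connect(offlineCtx.destination);

    oscillator.start(0);
    oscillator.stop(1);

    // Rendering runs as fast as possible, not in real time.
    return offlineCtx.startRendering().then(function (buffer) {
        return buffer.getChannelData(0); // Float32Array of PCM samples
    });
}
```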
Eventually I found an answer here: http://www.pp4s.co.uk/main/tu-sms-audio-recording.html#co-tu-sms-audio-recording__js but you will not like it. Apparently, the Audio API is not yet standardized enough for this to work on all browsers. So I have been able to run the code above in Firefox, but not Chrome.
Basic ideas:
use dest = ac.createMediaStreamDestination(); to get a destination for the sound
use new MediaRecorder(dest.stream); to get a recorder
use the MediaRecorder ondataavailable and stop events to get the data and combine it into a Blob
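The steps above could be sketched like this (subject to the browser-support caveat already mentioned; the function and callback names are illustrative):

```javascript
// Record the output of an audio graph into a Blob via MediaRecorder.
function recordGraph(audioCtx, sourceNode, seconds, done) {
    var dest = audioCtx.createMediaStreamDestination(); // capture point
    sourceNode.connect(dest);

    var recorder = new MediaRecorder(dest.stream);
    var chunks = [];
    recorder.ondataavailable = function (e) { chunks.push(e.data); };
    recorder.onstop = function () {
        // Combine the recorded chunks into a single Blob.
        done(new Blob(chunks, { type: 'audio/ogg; codecs=opus' }));
    };

    recorder.start();
    setTimeout(function () { recorder.stop(); }, seconds * 1000);
}
```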
I'm trying to use the web audio API to create an audio stream with the left and right channels generated with different oscillators. The output of the left channel is correct, but the right channel is 0. Based on the spec, I can't see what I'm doing wrong.
Tested in Chrome dev.
Code:
var context = new AudioContext();

var l_osc = context.createOscillator();
l_osc.type = "sine";
l_osc.frequency.value = 100;

var r_osc = context.createOscillator();
r_osc.type = "sawtooth";
r_osc.frequency.value = 100;

// Combine the left and right channels.
var merger = context.createChannelMerger(2);
merger.channelCountMode = "explicit";
merger.channelInterpretation = "discrete";
l_osc.connect(merger, 0, 0);
r_osc.connect(merger, 0, 1);

// Dump the generated waveform to a MediaStream output.
var dest_stream = context.createMediaStreamDestination();
merger.connect(dest_stream);
l_osc.start();
r_osc.start();

var track = dest_stream.stream.getAudioTracks()[0];
var plugin = document.getElementById('plugin');
plugin.postMessage(track);
The channelInterpretation means the merger node will mix each oscillator connection up to two channels, but then, because you set an explicit channelCountMode, it stacks the two channels per connection to get four channels and (because the mode is explicit) simply drops the top two. Unfortunately, those dropped channels are the two channels from the second input, so everything from the second connection is lost.
In general, you shouldn't need to mess with the channelCount/channelInterpretation settings; the defaults will do the right thing for stereo.
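Concretely, the question's code with those two override lines removed should produce a true stereo stream (wrapped in a function here for illustration):

```javascript
// Stereo merger relying on the default channelCountMode and
// channelInterpretation: each mono oscillator lands on exactly the
// merger input it is connected to.
function makeStereoStream() {
    var context = new AudioContext();

    var l_osc = context.createOscillator();
    l_osc.type = "sine";
    l_osc.frequency.value = 100;

    var r_osc = context.createOscillator();
    r_osc.type = "sawtooth";
    r_osc.frequency.value = 100;

    var merger = context.createChannelMerger(2);
    l_osc.connect(merger, 0, 0); // left channel
    r_osc.connect(merger, 0, 1); // right channel

    var dest_stream = context.createMediaStreamDestination();
    merger.connect(dest_stream);
    l_osc.start();
    r_osc.start();
    return dest_stream.stream;
}
```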
I'm experimenting with the Web Audio API, and my goal is to create a digital guitar where each string has an initial sound source of an actual guitar playing the string open, and where all the other fret-position sounds are generated dynamically. After some research into the subject (this is all pretty new to me) it sounded like this might be achieved by altering the frequency of the source sound sample.
The problem is that I've seen lots of algorithms for altering synthesized sine waves, but nothing for altering the frequency of an audio sample. Here is a sample of my code to give a better idea of how I'm trying to implement this:
// Guitar chord buffer
var chordBuffer = null;

// Create audio context
var context = new webkitAudioContext();

// Load sound sample
var request = new XMLHttpRequest();
request.open('GET', 'chord.mp3', true);
request.responseType = 'arraybuffer';
request.onload = loadChord;
request.send();

// Handle guitar string "pluck"
$('.string').mouseenter(function (e) {
    e.preventDefault();
    var source = context.createBufferSource();
    source.buffer = chordBuffer;

    // Create a JavaScriptNode so we can get at the raw audio buffer
    var jsnode = context.createJavaScriptNode(1024, 1, 1);
    jsnode.onaudioprocess = changeFrequency;

    // Connect nodes and play
    source.connect(jsnode);
    jsnode.connect(context.destination);
    source.noteOn(0);
});

function loadChord() {
    context.decodeAudioData(
        request.response,
        function (pBuffer) { chordBuffer = pBuffer; },
        function (pError) { console.error(pError); }
    );
}

function changeFrequency(e) {
    var ib = e.inputBuffer.getChannelData(0);
    var ob = e.outputBuffer.getChannelData(0);
    var n = ib.length;
    for (var i = 0; i < n; ++i) {
        // Code needed...
    }
}
So there you have it: I can play the sound just fine, but I'm at a bit of a loss when it comes to writing the code in the changeFrequency function that would change the chord sample's frequency so it sounds like another fret position on the string. Any help with this code would be appreciated, as would opinions on whether what I'm attempting is even possible.
Thanks!
playbackRate will change the pitch of the sound, but also its playback time.
If you want to change only the pitch, you can use a pitch shifter. Check my JavaScript pitch shifter implementation here, and its use with a JavaScriptNode in this plugin.
You can get the desired behavior by setting playbackRate, but as Brad says, you're going to have to use multi-sampling. Also see this SO question: Setting playbackRate on audio element connected to web audio api.
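For the fret-position use case, each fret raises the pitch by one equal-temperament semitone, so fret n corresponds to a playback rate of 2^(n/12). A small helper (the function name is illustrative):

```javascript
// Each fret is one semitone up in equal temperament, so fret n
// corresponds to a playback rate of 2^(n/12); fret 12 (the octave)
// gives exactly double speed.
function fretToPlaybackRate(fret) {
    return Math.pow(2, fret / 12);
}

// e.g. source.playbackRate.value = fretToPlaybackRate(5);
```

As noted above, this also shortens the sample as it raises the pitch, which is why multi-sampling or a pitch shifter is still recommended for larger intervals.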