Dividing one audio signal by another one - javascript

Short version
I need to divide an audio signal by another one (amplitude-wise). How could I accomplish this in the Web Audio API, without using ScriptProcessorNode? (with ScriptProcessorNode the task is trivial, but it is completely unusable for production due to the inherent performance issues)
Long version
Consider two audio sources, two OscillatorNodes for example, oscA and oscB:
var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();
Now, consider that these oscillators are LFOs, both with low (i.e. <20Hz) frequencies, and that their signals are used to control a single destination AudioParam, for example, the gain of a GainNode. Through various routing setups, we can define mathematical operations between these two signals.
Addition
If oscA and oscB are both directly connected to the destination AudioParam, their outputs are added together:
var dest = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(dest.gain);
Subtraction
If the output of oscB is first routed through another GainNode with a gain of -1, and that GainNode is then connected to the destination AudioParam, then the output of oscB is effectively subtracted from that of oscA, because the graph computes oscA + (-oscB). Using this trick we can subtract one signal from another:
var dest = audioCtx.createGain();
var inverter = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(inverter);
inverter.gain.value = -1;
inverter.connect(dest.gain);
Multiplication
Similarly, if the output of oscA is connected to another GainNode, and the output of oscB is connected to the gain AudioParam of that GainNode, then oscB is multiplying the signal of oscA:
var dest = audioCtx.createGain();
var multiplier = audioCtx.createGain();
oscA.connect(multiplier);
oscB.connect(multiplier.gain);
multiplier.connect(dest.gain);
Division (?)
Now, I want the output of oscB to divide the output of oscA. How do I do this, without using ScriptProcessorNode?
Edit
My earlier, absolutely ridiculous attempts at solving this problem were:
Using a PannerNode and driving its positionZ param, which did yield a result that decreased as signal B (oscB) increased, but the values were completely off (e.g. it yielded 12/1 = 8.5 and 12/2 = 4.2) -- the offset can be compensated for with a GainNode whose gain is set to 12 / 8.48528099060058593750 (an approximation), but the approach only supports values >= 1
Using an AnalyserNode to rapidly sample the audio signal and then use that value (LOL)
Edit 2
The reason why the ScriptProcessorNode is essentially useless for applications more complex than a tech demo is that:
it executes audio processing on the main thread (!), and heavy UI work will introduce audio glitches
a single, dead simple ScriptProcessorNode will take 5% CPU power on a modern device, as it performs processing with JavaScript and requires data to be passed between the audio thread (or rendering thread) and the main thread (or UI thread)

It should be noted that ScriptProcessorNode is deprecated.
If you need A/B, then you need A multiplied by 1/B, i.e. the reciprocal of signal B. You can use a WaveShaperNode to compute the reciprocal. This node maps input values to output values through a curve array: taking the reciprocal means that -1 becomes -1, -0.5 becomes -2, and so on.
In addition, be aware of division by zero; you have to handle it. In the following code I simply take the next value after zero and double it.
function makeInverseCurve() {
    var n_samples = 44100,
        curve = new Float32Array(n_samples),
        x;
    for (var i = 0; i < n_samples; i++) {
        x = i * 2 / n_samples - 1;
        // at x == 0 the reciprocal is infinite; use twice the next value (n_samples) instead
        curve[i] = (i * 2 == n_samples) ? n_samples : 1 / x;
    }
    return curve;
}
A working fiddle is here. If you remove .connect(distortion) from the audio chain, you see a plain sine wave. The visualization code was taken from Sonoport.
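To tie this back to the original question, here is a minimal sketch (not part of the fiddle) of how the inverse curve can be combined with the multiplication trick so that oscA ends up divided by oscB; the node names and the zeroed base gain are my own choices:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();

// Shape oscB into its reciprocal, 1 / oscB
var reciprocal = audioCtx.createWaveShaper();
reciprocal.curve = makeInverseCurve();

// Multiply oscA by the reciprocal, giving oscA / oscB
var divider = audioCtx.createGain();
divider.gain.value = 0; // zero the base gain so only the reciprocal signal drives it

oscA.connect(divider);
oscB.connect(reciprocal);
reciprocal.connect(divider.gain);

// divider now outputs oscA / oscB and can be connected to a destination
// AudioParam (e.g. dest.gain) just like in the multiplication example.
oscA.start();
oscB.start();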

Related

How to gradually change lowpass frequency in webaudio?

I'm trying to gradually change the frequency amount of my lowpass filter, but instead of happening gradually, it happens instantly.
This code should start at a frequency, exponentially decrease to 200 at 1 second in, then stop at 2 seconds in. Instead it stays the same until 1 second where it instantly jumps to the lower frequency.
var context = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = context.createOscillator();
var now = context.currentTime;
//lowpass node
var lowPass = context.createBiquadFilter();
lowPass.connect(context.destination);
lowPass.frequency.value = 500;
lowPass.Q.value = 0.5;
lowPass.frequency.exponentialRampToValueAtTime(200, now + 1);
oscillator.connect(lowPass);
oscillator.start(now);
oscillator.stop(now + 2);
Edit: I just realized it does actually work in Chrome. But I mainly use Firefox; can I just not use Web Audio yet?
The AudioParam interface has 2 modes:
A. Immediate
B. Automated
In mode A you simply set the value property of the parameter.
In mode B, if you want to 'ramp' from 500 to 200, you first have to use an automation event to set the starting value, e.g.:
frequency.setValueAtTime(500, 0)
A startTime parameter of zero applies the value immediately, according to the spec.
What you are doing is mixing both modes, but the automation events do not take the plainly assigned value into account.
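A minimal sketch of the corrected scheduling applied to the code from the question; the only substantive change is anchoring the automation timeline with setValueAtTime before the ramp:
var context = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = context.createOscillator();
var now = context.currentTime;

// lowpass node
var lowPass = context.createBiquadFilter();
lowPass.connect(context.destination);
lowPass.Q.value = 0.5;

// anchor the automation timeline at 500 Hz, then ramp down to 200 Hz over 1 second
lowPass.frequency.setValueAtTime(500, now);
lowPass.frequency.exponentialRampToValueAtTime(200, now + 1);

oscillator.connect(lowPass);
oscillator.start(now);
oscillator.stop(now + 2);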

How to attach sound effects to an AudioBuffer

I'm trying to add the following sound effects to some audio files, then grab their audio buffers and convert to .mp3 format
Fade-out the first track
Fade in the following tracks
A background track (and make it background by giving it a small gain node)
Another track that will serve as the more audible of both merged tracks
Fade out the previous one and fade in the first track again
I observed that the effects exposed by the AudioParam interface, as well as those from the GainNode interface, are applied on the way to the context's destination rather than to the buffer itself. Is there a technique to attach the AudioParam values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning on the way to the destination (meaning I must connect the source nodes and output them via OfflineContexts/startRendering)? I tried that method previously and was told in my immediately preceding thread that I only needed one BaseAudioContext and that it didn't have to be an OfflineContext. I think that to have different effects on different files I need several contexts, so I'm stuck in a dilemma: I have various AudioParams and GainNodes, but simply invoking them by calling start will lose their effect.
The following snippets demonstrate the effects I'm referring to, while the full code can be found at https://jsfiddle.net/ka830Lqq/3/
var beginNodeGain = overallContext.createGain(); // Create a gain node
beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // make the volume high 3 secs before the end
beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // reduce the volume to the minimum for the duration of expRampTime - setValTime i.e 3
// connect the AudioBufferSourceNode to the gainNode and the gainNode to the destination
begin.connect(beginNodeGain);
Another snippet goes thus
function handleBg (bgBuff) {
    var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate), // using a new context here so we can utilize its individual gains
        bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
        bgGainNode = bgContext.createGain(),
        smoothTrans = new Float32Array(3);
    smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in, but is actually between 5 and 6
    smoothTrans[1] = 1.0;
    smoothTrans[2] = 0.4; // make it flow in the background
    bgGainNode.gain.setValueAtTime(0, 0); // currentTime here is 6.something-something
    bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at `param 2` and last for `param 3` seconds. bgBuff.duration
    for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
        for (var j = 0; j < finalBuff[0].length; j++) {
            var data = bgBuff.getChannelData(channel),
                loopBuff = bgAudBuff.getChannelData(channel),
                oldBuff = data[j] != void(0) ? data[j] : data[j - data.length];
            loopBuff[j] = oldBuff;
        }
    }
    // instead of piping them to the output speakers, merge them
    mixArr.push(bgAudBuff);
    gottenBgBuff.then(handleBody());
}
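As a point of reference, here is a minimal sketch (not from the question) of how a fade-out could be baked into a new buffer with an OfflineAudioContext, whose startRendering output already contains the gain automation; the function name and fade timing are assumptions:
// A sketch: render one source into a new AudioBuffer with a 3-second fade-out applied.
function renderWithFadeOut(buffer) {
    var offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);

    var source = offline.createBufferSource();
    source.buffer = buffer;

    var gainNode = offline.createGain();
    gainNode.gain.setValueAtTime(1.0, buffer.duration - 3);
    gainNode.gain.exponentialRampToValueAtTime(0.01, buffer.duration);

    source.connect(gainNode);
    gainNode.connect(offline.destination);
    source.start(0);

    // Resolves with an AudioBuffer that already contains the fade.
    return offline.startRendering();
}
Each track could be rendered this way in its own offline context, and the resulting AudioBuffers then mixed sample by sample into the final buffer.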

Get Final Output Frequency of Chained Oscillators

I've set up a web page with a theremin and I'm trying to change the color of a web page element based on the frequency of the note being played. The way I'm generating sound right now looks like this:
osc1 = page.audioCX.createOscillator();
pos = getMousePos(page.canvas, ev);
osc1.frequency.value = pos.x;
gain = page.audioCX.createGain();
gain.gain.value = 60;
osc2 = page.audioCX.createOscillator();
osc2.frequency.value = 1;
osc2.connect(gain);
gain.connect(osc1.frequency);
osc1.connect(page.audioCX.destination);
What this does is oscillate the pitch of the sound created by osc1. I can set the color from the frequency of osc1 by using osc1.frequency.value, but this doesn't factor in the changes applied by the other nodes.
How can I get the resultant frequency from those chained elements?
You have to do the addition yourself (osc1.frequency.value + output of gain).
The best current (but see below) way to get access to the output of gain is probably to use a ScriptProcessorNode. You can just use the last sample from each buffer passed to the ScriptProcessorNode, and set the buffer size based on how frequently you want to update the color.
(Note on ScriptProcessorNode: There is a bug in Chrome and Safari that makes ScriptProcessorNode not work if it doesn't have at least one output channel. You'll probably have to create it with one input and one output, have it send all zeros to the output, and connect it to the destination, to get it to work.)
Near-future answer: You can also try using an AnalyserNode, but under the current spec, the time domain data can only be read from an AnalyserNode as bytes, which means the floating point samples are being converted to be in the range [0, 255] in some unspecified way (probably scaling the range [-1, 1] to [0, 255], so the values you need would be clipped). The latest draft spec includes a getFloatTimeDomainData method, which is probably your cleanest solution. It seems to have already been implemented in Chrome, but not Firefox, as far as I can tell.
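A minimal sketch of that ScriptProcessorNode approach, reusing the gain and osc1 nodes from the question; the buffer size and the updatePageColor callback are placeholders:
// Tap the modulation signal (the output of gain) with a ScriptProcessorNode.
var processor = page.audioCX.createScriptProcessor(1024, 1, 1);

gain.connect(processor);
processor.connect(page.audioCX.destination); // needed in some browsers (see note above)

processor.onaudioprocess = function (event) {
    var input = event.inputBuffer.getChannelData(0);
    var output = event.outputBuffer.getChannelData(0);

    // Silence the output so the tapped modulation signal is not audible.
    for (var i = 0; i < output.length; i++) output[i] = 0;

    // The last sample of this block is the current modulation value.
    var modulation = input[input.length - 1];
    var currentFrequency = osc1.frequency.value + modulation;

    updatePageColor(currentFrequency); // hypothetical UI hook
};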

speex splitted audio data - WebAudio - VOIP

I'm running a little app that encodes and decodes an audio array with the Speex codec in JavaScript: https://github.com/dbieber/audiorecorder
with a small array filled with a sine waveform:
for(var i=0;i<16384;i++)
data.push(Math.sin(i/10));
This works. But I want to build a VOIP application and have more than one array. So if I split my array into two parts and encode > decode > merge, it doesn't sound the same as before.
Take a look at this:
fiddle: http://jsfiddle.net/exh63zqL/
Both buttons should give the same audio output.
How can I get the same output in both cases? Is there a special mode in speex.js for split audio data?
Speex is a lossy codec, so the output is only an approximation of your initial sine wave.
Your sine frequency is about 7 kHz, which is near the codec's upper 8 kHz bandwidth and as such even more likely to be altered.
What the codec outputs looks like a comb of Dirac pulses that will sound like your initial sinusoid as heard through a phone, which is certainly different from the original.
See this fiddle where you can listen to what the codec makes of your original sine waves, whether they are split in half or not.
// Generate a continuous sine wave in 2 arrays
var len = 16384;
var buffer1 = [];
var buffer2 = [];
var buffer = [];
for (var i = 0; i < len; i++) {
    buffer.push(Math.sin(i / 10));
    if (i < len / 2)
        buffer1.push(Math.sin(i / 10));
    else
        buffer2.push(Math.sin(i / 10));
}

// Encode and decode both arrays separately
var en = Codec.encode(buffer1);
var dec1 = Codec.decode(en);
var en = Codec.encode(buffer2);
var dec2 = Codec.decode(en);

// Merge the arrays into 1 output array
var merge = [];
for (var i in dec1)
    merge.push(dec1[i]);
for (var i in dec2)
    merge.push(dec2[i]);

// Encode and decode the whole array
var en = Codec.encode(buffer);
var dec = Codec.decode(en);

// -----------------
// Below is only for playing the different arrays
// -----------------
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function play(sound) {
    var audioBuffer = audioCtx.createBuffer(1, sound.length, 44100);
    var bufferData = audioBuffer.getChannelData(0);
    bufferData.set(sound);
    var source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start();
}

$("#o").click(function() { play(dec); });
$("#c1").click(function() { play(dec1); });
$("#c2").click(function() { play(dec2); });
$("#m").click(function() { play(merge); });
If you merge the decoder outputs of the two halves, you will hear an additional click due to the abrupt transition from one signal to the other, sounding basically like a relay switching.
To avoid that you would have to smooth the values around the merging point of your two buffers.
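A minimal sketch of such smoothing: a short linear crossfade around the junction of the two decoded halves; the window length is an arbitrary assumption to tune by ear:
// Crossfade the tail of dec1 into the head of dec2 over `fade` samples
// before concatenating, to avoid the click at the junction.
var fade = 128; // assumed window length
var merged = [];

for (var i = 0; i < dec1.length - fade; i++)
    merged.push(dec1[i]);

for (var i = 0; i < fade; i++) {
    var t = i / fade; // ramps from 0 to 1 across the window
    merged.push(dec1[dec1.length - fade + i] * (1 - t) + dec2[i] * t);
}

for (var i = fade; i < dec2.length; i++)
    merged.push(dec2[i]);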
Note that Speex is a lossy codec, so by definition it can't give the same result as the original buffer. Besides, it is designed to be a codec for voice, so the 1-2 kHz range will be handled best, as the codec expects a specific form of signal. In some way, it can be compared to JPEG technology for raster images.
I've slightly modified your jsfiddle example so you can play with different parameters and compare the results. Just providing a simple sinusoid of arbitrary frequency is not a proper way to evaluate a codec. However, in the example you can see the different impact on the initial signal at different frequencies.
buffer1.push(Math.sin(2*Math.PI*i*frequency/sampleRate));
I think you should build an example with a recorded voice and compare the results in that case; it would be a fairer test.
In general, to understand this in detail you would have to study digital signal processing. I can't even provide a proper link, since it is a whole science and mathematically intensive (the only proper book I know of is in Russian). If anyone here with a strong mathematics background can share suitable literature on this topic, I would appreciate it.
EDIT: as mentioned by Kuroi Neko, there is a problem at the boundaries of the buffer. It also seems impossible to save the decoder state, as mentioned in this post, because the library in use doesn't support it. If you look at the source code, you see that it wraps a third-party Speex codec and does not provide full access to its features. I think the best approach would be to find a decent Speex library that supports state recovery, similar to this.

How can I set a phase offset for an OscillatorNode in the Web Audio API?

I'm trying to implement Stereo Phase as it is described here: http://www.image-line.com/support/FLHelp/html/plugins/3x%20OSC.htm
"Stereo Phase (SP) - Allows you to set different phase offset for the left and right channels of the generator. The offset results in the oscillator starting at a different point on the oscillator's shape (for example, start at the highest value of the sine function instead at the zero point). Stereo phase offset adds to the richness and stereo panorama of the sound produced."
I'm trying to achieve this for an OscillatorNode. My only idea is to use createPeriodicWave (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-createPeriodicWave). However, the description of createPeriodicWave in the specification is above my understanding, and I have not found any examples via Google.
Any help in deciphering the description of createPeriodicWave would be helpful as would any other ideas about how to achieve this effect.
Thanks!
Mcclellan and others,
This answer helped and subsequently warped me into the world of Fourier. With the help of a page on the subject and some Wikipedia, I think I got the square and sawtooth patterns down, but the triangle pattern still eludes me. Does anyone know?
It indeed gives you the ability to phase shift, as this article by Nick Thompson explains (although he names the AudioContext methods differently, the principle is the same).
As far as the square and sawtooth patterns go:
var n = 4096;
var real = new Float32Array(n);
var imag = new Float32Array(n);
var ac = new AudioContext();
var osc = ac.createOscillator();

/* Pick a wave pattern */

/* Sine
imag[1] = 1; */

/* Sawtooth
for (x = 1; x < n; x++)
    imag[x] = 2.0 / (Math.pow(-1, x) * Math.PI * x); */

/* Square */
for (x = 1; x < n; x += 2)
    imag[x] = 4.0 / (Math.PI * x);

var wave = ac.createPeriodicWave(real, imag);
osc.setPeriodicWave(wave);
osc.connect(ac.destination);
osc.start();
osc.stop(2); /* Have it stop after 2 seconds */
This will play the activated pattern, the square pattern in this case. What would the triangle formula look like?
A simple way to fake it would be to add separate delay nodes to the left and right channels, and give them user-controlled delay values. This would be my approach and will have more-or-less the same effect as a phase setting.
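A minimal sketch of that delay-based approach, assuming an existing context ctx and oscillator osc; the delay shown corresponds to a quarter period of a 440 Hz tone:
// Delay only the right channel to fake a stereo phase offset.
var merger = ctx.createChannelMerger(2);
var delayRight = ctx.createDelay();
delayRight.delayTime.value = 0.25 / 440; // quarter period at 440 Hz

osc.connect(merger, 0, 0);          // left channel: dry
osc.connect(delayRight);
delayRight.connect(merger, 0, 1);   // right channel: delayed
merger.connect(ctx.destination);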
If you want to use createPeriodicWave, unfortunately you'll probably have to understand the somewhat difficult math behind it.
Basically, you'll first have to represent your waveform as a sum of sine wave "partials". All periodic waves have some representation of this form. Then, once you've found the relative magnitudes of each partial, you'll have to phase shift them separately for left and right channels by multiplying each by a complex number. You can read more details about representing periodic waves as sums of sine waves here: http://music.columbia.edu/cmc/musicandcomputers/chapter3/03_03.php
Using createPeriodicWave has a significant advantage over using a BufferSourceNode: createPeriodicWave waveforms will automatically avoid aliasing. It's rather difficult to avoid aliasing if you're generating the waveforms "by hand" in a buffer.
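A minimal sketch of that idea, assuming the waveform is given as sine partials (such as the square-wave coefficients above) and that the phase offset is applied as a time shift, so partial n is rotated by n times the base phase; the helper name is my own:
// Build a PeriodicWave whose waveform is shifted by `phase` radians of the fundamental.
// `partials` holds the sine coefficients b[n] of the unshifted wave (index 0 unused).
function createPhaseShiftedWave(ac, partials, phase) {
    var n = partials.length;
    var real = new Float32Array(n);
    var imag = new Float32Array(n);
    for (var k = 1; k < n; k++) {
        // b[k] * sin(k*w*t + k*phase)
        //   = b[k]*sin(k*phase)*cos(k*w*t) + b[k]*cos(k*phase)*sin(k*w*t)
        real[k] = partials[k] * Math.sin(k * phase);
        imag[k] = partials[k] * Math.cos(k * phase);
    }
    return ac.createPeriodicWave(real, imag);
}
Two such waves with different phase values, one per channel (e.g. two oscillators panned hard left and right), would give the stereo phase effect described in the question.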
I do not think it is possible to apply a phase offset to an OscillatorNode directly.
A way to do it would be to use context.createBuffer to generate a sine wave buffer (or any type of wave that you want), set it as the buffer of an AudioBufferSourceNode, and then use the offset parameter of its start() method. But you need to calculate the sample offset amount in seconds.
var buffer = context.createBuffer(1, 1024, 44100);
var data = buffer.getChannelData(0);
for (var i = 0; i < data.length; i++) {
    // generate waveform
}
var osc = context.createBufferSource();
osc.buffer = buffer;
osc.loop = true;
osc.connect(context.destination);
osc.start(time, offsetInSeconds); // second argument is the start offset, in seconds
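For the offset calculation, a phase of φ radians at frequency f corresponds to a start offset of φ / (2πf) seconds; a minimal example with assumed values:
// Assumed values: a 440 Hz wave started a quarter cycle (π/2) into its shape.
var frequency = 440;
var phase = Math.PI / 2;
var offsetInSeconds = (phase / (2 * Math.PI)) / frequency; // ≈ 0.000568 s
osc.start(context.currentTime, offsetInSeconds);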
According to this article on Wolfram, the triangle wave can be constructed like this:
/* Triangle */
for (x = 1; x < n; x += 2)
    imag[x] = 8.0 / Math.pow(Math.PI, 2) * Math.pow(-1, (x - 1) / 2) / Math.pow(x, 2);
Also helpful by the way is the Wikipedia page that actually shows how the Fourier constructions work.
function getTriangleWave(imag, n) {
    for (var i = 1; i < n; i += 2) {
        imag[i] = (8 * Math.sin(i * Math.PI / 2)) / (Math.pow((Math.PI * i), 2));
    }
    return imag;
}
With Chrome 66 adding AudioWorklets, you can write sound-processing programs the same way as with the now-deprecated ScriptProcessorNode.
Using this, I made a convenience library that behaves like a normal Web Audio API OscillatorNode but can also have its phase (among other things) varied. You can find it here
const context = new AudioContext()
context.audioWorklet.addModule("worklet.js").then(() => {
    const osc = new BetterOscillator(context)
    osc.parameters.get("phase").value = Math.PI / 4
    osc.connect(context.destination)
    osc.start()
})
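For the curious, a minimal sketch of what a phase-aware sine oscillator processor might look like inside such a worklet module; this is not the library's actual implementation, just an illustration of the AudioWorkletProcessor API with assumed parameter names:
// worklet.js (sketch): a sine oscillator whose "phase" parameter offsets the waveform.
class PhaseSineProcessor extends AudioWorkletProcessor {
    static get parameterDescriptors() {
        return [
            { name: "frequency", defaultValue: 440 },
            { name: "phase", defaultValue: 0 }
        ]
    }
    constructor() {
        super()
        this.angle = 0 // running phase accumulator, in radians
    }
    process(inputs, outputs, parameters) {
        const output = outputs[0]
        const frequency = parameters.frequency[0]
        const phase = parameters.phase[0]
        for (let i = 0; i < output[0].length; i++) {
            const sample = Math.sin(this.angle + phase)
            for (const channel of output) channel[i] = sample
            this.angle += 2 * Math.PI * frequency / sampleRate
        }
        return true
    }
}
registerProcessor("phase-sine", PhaseSineProcessor)
On the main thread such a processor would be instantiated with new AudioWorkletNode(context, "phase-sine") rather than the library's BetterOscillator wrapper.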
