Web Audio API equalizer - JavaScript

I have been looking around for a way to create an audio equalizer using the Web Audio API: http://webaudio.github.io/web-audio-api/
I found a lot of threads about creating a visualizer, but that is of course not what I want to do. I simply want to be able to alter the sound using frequency sliders. I found that the BiquadFilterNode should do the job, but I can't get a good result. The sound changes consistently when I change any frequency value, but it just lowers the quality of the sound instead of altering the frequencies.
I first load a sound:
Audio.prototype.init = function(callback){
    var $this = this;
    this.gainScale = d3.scale.linear().domain([0,1]).range([-40,40]);
    this.context = new AudioContext();
    this.loadSounds(function(){
        $this.loadSound(0);
        $this.play();
        callback.call();
    });
};
Everything works well, the sound plays when ready.
I have 10 sliders for frequencies [32,64,125,250,500,1000,2000,4000,8000,16000].
For each slider I create a filter and connect it to the source, as described here: Creating a 10-Band Equalizer Using Web Audio API:
Audio.prototype.createFilter = function(index,frequency){
    if(this.filters == undefined) this.filters = [];
    var filter = this.context.createBiquadFilter();
    filter.type = 2;
    filter.frequency.value = frequency;
    // Connect source to filter, filter to destination.
    this.source.connect(filter);
    filter.connect(this.context.destination);
    this.filters[index] = filter;
};
Finally, when I change the value of a slider I update the filter:
Audio.prototype.updateFilter = function(index,newVal){
    this.filters[index].frequency.gain = this.gainScale(newVal);
};
NB: my this.gainScale function takes a value in [0,1] as input and returns a value in [-40,40], to set the gain between -40 and 40 for each frequency.
Would appreciate any help!

Multiple things here.
1) You shouldn't use bandpass filters in parallel to implement an equalizer. Among other issues, biquad filtering changes the phase of different parts of the signal, and therefore the different bands will end up in different phases, and you'll have some potentially quite bad effects on your sound when it recombines.
2) The approach that you want is to have a low shelf filter on the bottom end, a high shelf filter on the top end, and any number of peaking filters in the middle. These should be connected in series (i.e. the input signal connects to one filter, which connects to another filter, which connects to another filter, and so on), and only the final filter should be connected to the audioContext.destination. The Q values should be tuned (see below), and the gain on each filter determines the boost/cut. (For a flat response, all filter gains should be set to zero.) A sketch of this chain appears at the end of this answer.
3) filter.type is an enumerated type that you should set as a string, not as a number. "lowshelf", "highshelf" and "peaking" are the ones you're looking for here.
You can see an example of a simple three-band equalizer in my DJ app - https://github.com/cwilso/wubwubwub/blob/MixTrack/js/tracks.js#L189-L207 sets it up. To modify this into a multiband equalizer, you'll need to tweak the Q value of each filter to get the bands to not overlap too much (it's not bad if they do overlap, but your bands will be more precise if you tune them). You can use http://googlechrome.github.io/web-audio-samples/samples/audio/frequency-response.html to examine the frequency response for a given Q and filter type.
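As a rough illustration of points 2) and 3), here is a minimal sketch of such a series chain. The band frequencies match the question; the Q value and the source node are assumptions you would adapt to your own setup:
var context = new AudioContext();
var bands = [32, 64, 125, 250, 500, 1000, 2000, 4000, 8000, 16000];
var filters = bands.map(function (freq, i) {
    var filter = context.createBiquadFilter();
    // shelves on the ends, peaking filters in between
    if (i === 0) filter.type = "lowshelf";
    else if (i === bands.length - 1) filter.type = "highshelf";
    else filter.type = "peaking";
    filter.frequency.value = freq;
    filter.Q.value = 1;     // tune per band (see the frequency-response tool above)
    filter.gain.value = 0;  // 0 dB = flat response to start
    return filter;
});
// connect in series: source -> filter 0 -> filter 1 -> ... -> destination
filters.reduce(function (prev, next) { prev.connect(next); return next; });
source.connect(filters[0]); // source is assumed to be your AudioBufferSourceNode
filters[filters.length - 1].connect(context.destination);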

One issue is that you want your sliders to be controlling the gain of the filter at a given frequency, not the filter frequency itself. According to the spec, the gain of a bandpass filter is not controllable, which is a bit limiting. Fortunately, you can put a gain node at the end of each filter.
if(this.filters == undefined) this.filters = [];
if(this.gains == undefined) this.gains = [];
var filter = this.context.createBiquadFilter();
filter.type = "bandpass"; // set the type as a string (the numeric enum is deprecated)
filter.frequency.value = frequency;
var gain = this.context.createGain(); // createGainNode() is the deprecated name
// Connect source to filter, filter to the gain, gain to destination.
this.source.connect(filter);
filter.connect(gain);
gain.connect(this.context.destination);
this.filters[index] = filter;
this.gains[index] = gain;
Next you'll need to connect your slider up to the gain parameter of the gain node; a hedged sketch follows. I don't really know Web Audio, so I'll leave the details to you. The last thing is that you need to specify the Q of the filter. I get the impression from your list of frequencies that you're trying to create octave-wide filters, so the Q factor is probably going to be around 1.414. You're really going to need to do a bit of research if you want to get this right.
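For the slider hookup, here is a minimal sketch, reusing the asker's gainScale to map the slider value in [0,1] to decibels. Note that a GainNode's gain parameter is linear, so the dB value has to be converted:
Audio.prototype.updateFilter = function(index, newVal){
    var dB = this.gainScale(newVal); // [0,1] -> [-40,40] dB
    // convert decibels to the linear gain a GainNode expects
    this.gains[index].gain.value = Math.pow(10, dB / 20);
};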

Related

Get the position of 3D objects in Facebook AR and change them via script

I have a 3D object which I want to "move" from A to B via script. I am not too sure how to go about it; I don't understand the Facebook documentation. Just a short example as a start would be great.
I assume something along the lines of:
var object = Scene.root.find("object");
var lastPosX = object.transform.positionX.lastValue;
object.transform.positionX = //NOT SURE HOW TO PUT THE NEW POSITION
What you need to do is use the AnimationModule - here is a simple example of how to do that:
const Animation = require('Animation');
var obj = Scene.root.find("object");
//set up the length of the animations, 1000 = 1 second
var driver = Animation.timeDriver({durationMilliseconds: 1000});
//define the starting and ending values (start at 0, go to 100)
var sampler = Animation.samplers.linear(0, 100);
//create an animation signal to control the object x position
obj.transform.x = Animation.animate(driver, sampler);
//start the animation
driver.start();
Animation in ARS, like many other things, is based around the concept of "Reactive Programming" and working with "Signals" which are values that change over time. It is essential to get a good grasp of what a signal is and how it works to write useful code in ARS. Read through this for an introductory overview: https://developers.facebook.com/docs/ar-studio/scripting/basics
The above is a very basic example, but there are far more interesting, advanced, and complex effects that you can achieve using the AnimationModule; take a look at the documentation here for more information: https://developers.facebook.com/docs/ar-studio/reference/classes/animationmodule/
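As a variation on the example above, moving from the current position (A) to a new position (B) could look roughly like this. This is a hedged sketch: it assumes the current value can be read from the position signal via lastValue, and exact property names may differ between versions:
const Animation = require('Animation');
const Scene = require('Scene');

var obj = Scene.root.find("object");
// read the current x position from the signal (point A)
var startX = obj.transform.x.lastValue;
// animate from A to B over one second (B = startX + 100 is just an example)
var driver = Animation.timeDriver({durationMilliseconds: 1000});
var sampler = Animation.samplers.linear(startX, startX + 100);
obj.transform.x = Animation.animate(driver, sampler);
driver.start();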
Hope this helps!

How to modify an object from updates and remove old keys?

I am creating a multiplayer game and I have an object in javascript, with a number of keys and values.
This object is called players, for holding information about each player that is connected to the game server.
name is the key of the object, and the value is a Player object which holds information such as x, y, level, etc.
I am constantly sending requests to the server to get updated information about the players.
Because this is happening very often, I don't want the players object to be reset every time (players = {}), so instead, I am updating the object with any new information.
At the moment I am checking if name in players, and if so, I update the object like this:
players[name].x = x;
players[name].y = y;
// etc.
Otherwise, I simply create a new Player object, with the information and add it to the players object. (If a new player connected for instance)
The problem is, if a player that is already in players is no longer present in the updated information from the server (i.e. the player disconnected), how do I go about removing them from the object?
Is it necessary to loop through players and, if a player is no longer in the updated information, remove it from the object, or is there a simpler way of doing this?
If there is no other way, is it a better approach to just reset the dictionary and add the data? It feels like that isn't the best way to do something simple like this.
Here is my code so far:
var newplayers = /* new info from server */;
for(var i = 0; i < newplayers.length; i++)
{
    var pl = newplayers[i];
    var name = pl.name;
    var x = pl.x;
    var y = pl.y;
    // etc.
    if(name in players)
    {
        players[name].x = x;
        players[name].y = y;
        // etc.
    }
    else
    {
        var newplayer = new Player();
        newplayer.x = x;
        newplayer.y = y;
        // etc.
        players[name] = newplayer;
    }
}
// What if the player is no longer in the updated info, but still in players?
All help appreciated! Thanks in advance!
So you have a choice between removing outdated data from your players dictionary or rebuilding it from scratch every time.
I think the answer depends a lot on how much data you have. If you have at most 20 players, it probably doesn't matter too much. If you have 1 million players it's different.
If you want to be sure, the best thing to do would be to measure it. Try both solutions with the biggest number of players you want to be able to handle and see what the impact on performance is.
Or just go with the simplest implementation and see if it's good enough for your purpose. No point in optimising before you need it.
Personally I'd just loop through players to remove the outdated data. If the performance is not good enough, then I'd optimise.
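For the removal loop, a minimal sketch along those lines (assuming newplayers is the freshly received array from the server):
// collect the names present in this update
var seen = {};
for (var i = 0; i < newplayers.length; i++) {
    seen[newplayers[i].name] = true;
}
// drop every player the server no longer reports
for (var name in players) {
    if (!(name in seen)) {
        delete players[name];
    }
}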

WebAudio - Oscillator setPeriodicWave

I create three different linear chirps using the code found here on SO. With some other code snippets I save those three sounds as separate .wav files. This works so far.
Now I want to play those three sounds at the exact same time. So I thought I could use the WebAudio API, feed three oscillator nodes with the float arrays I got from the code above.
But I can't get even one oscillator node to play its sound.
My code so far (shrunk to one oscillator):
var osc = audioCtx.createOscillator();
var sineData = linearChirp(freq, (freq + signalLength), signalLength, audioCtx.sampleRate); // linearChirp from link above
// sine values; add 0 at the front because the docs state that the first value is ignored
// (note: unshift returns the new length, not the array, so call it separately)
sineData.unshift(0);
var imag = Float32Array.from(sineData);
var real = new Float32Array(imag.length); // cos values
var customWave = audioCtx.createPeriodicWave(real, imag);
osc.setPeriodicWave(customWave);
osc.start();
At the moment I suppose that I do not quite understand the math behind the periodic wave.
The code that plays the three sounds at the same time works (with simple sine values in the oscillator nodes), so I assume that the problem is my periodic wave.
Another question: is there a different way? Maybe using three MediaElementAudioSourceNodes that are linked to my three .wav files? I don't see a way to play them at the exact same time.
The PeriodicWave isn't a "stick a waveform in here and it will be used as a single oscillation" feature - it builds a waveform by specifying the relative strengths of various harmonics. Note that in the code you pointed to, they create a BufferSource node and point its .buffer to the results of linearChirp(). You can do that too - just use BufferSource nodes to play back the linearChirp() outputs, which (I think?) are just sine waves anyway? (If so, you could just use an oscillator and skip that whole messy "create a buffer" bit.)
If you just want to play back the buffers you've created, use BufferSource. If you want to create complex harmonics, use PeriodicWave. If you've created a single-cycle waveform and you want to play it back as a source waveform, use BufferSource and loop it.
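To address the "exact same time" part of the question: AudioBufferSourceNode.start() takes a time argument on the AudioContext clock, so all three buffers can be scheduled for the same instant. A minimal sketch, assuming chirp1, chirp2 and chirp3 are the sample arrays returned by linearChirp():
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();

function makeSource(samples) {
    var buffer = audioCtx.createBuffer(1, samples.length, audioCtx.sampleRate);
    buffer.getChannelData(0).set(samples);
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    return source;
}

// schedule all three sources for the same instant, slightly in the future
var startAt = audioCtx.currentTime + 0.1;
[chirp1, chirp2, chirp3].forEach(function (samples) {
    makeSource(samples).start(startAt);
});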

Get Final Output Frequency of Chained Oscillators

I've set up a web page with a theremin and I'm trying to change the color of a web page element based on the frequency of the note being played. The way I'm generating sound right now looks like this:
osc1 = page.audioCX.createOscillator();
pos = getMousePos(page.canvas, ev);
osc1.frequency.value = pos.x;
gain = page.audioCX.createGain();
gain.gain.value = 60;
osc2 = page.audioCX.createOscillator();
osc2.frequency.value = 1;
osc2.connect(gain);
gain.connect(osc1.frequency);
osc1.connect(page.audioCX.destination);
What this does is oscillate the pitch of the sound created by osc1. I can change the color based on the frequency of osc1 by using osc1.frequency.value, but this doesn't factor in the modulation applied by the other nodes.
How can I get the resultant frequency from those chained elements?
You have to do the addition yourself (osc1.frequency.value + output of gain).
The best current (but see below) way to get access to the output of gain is probably to use a ScriptProcessorNode. You can just use the last sample from each buffer passed to the ScriptProcessorNode, and set the buffer size based on how frequently you want to update the color.
(Note on ScriptProcessorNode: There is a bug in Chrome and Safari that makes ScriptProcessorNode not work if it doesn't have at least one output channel. You'll probably have to create it with one input and one output, have it send all zeros to the output, and connect it to the destination, to get it to work.)
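A sketch of that approach, built on the question's variables (updateColor is a hypothetical stand-in for your color-changing code):
// tap the modulation signal with a ScriptProcessorNode;
// the buffer size controls how often the color updates
var tap = page.audioCX.createScriptProcessor(4096, 1, 1);
gain.connect(tap);
tap.connect(page.audioCX.destination); // workaround for the bug noted above

tap.onaudioprocess = function (e) {
    var input = e.inputBuffer.getChannelData(0);
    var offset = input[input.length - 1];        // last sample of this block
    updateColor(osc1.frequency.value + offset);  // hypothetical helper
    e.outputBuffer.getChannelData(0).fill(0);    // keep the output silent
};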
Near-future answer: You can also try using an AnalyserNode, but under the current spec, the time domain data can only be read from an AnalyserNode as bytes, which means the floating point samples are being converted to be in the range [0, 255] in some unspecified way (probably scaling the range [-1, 1] to [0, 255], so the values you need would be clipped). The latest draft spec includes a getFloatTimeDomainData method, which is probably your cleanest solution. It seems to have already been implemented in Chrome, but not Firefox, as far as I can tell.
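Where getFloatTimeDomainData is available, that version could look roughly like this (again a sketch, not a tested implementation):
var analyser = page.audioCX.createAnalyser();
analyser.fftSize = 2048;
gain.connect(analyser);

var samples = new Float32Array(analyser.fftSize);
function currentFrequency() {
    analyser.getFloatTimeDomainData(samples); // float samples, no byte conversion
    return osc1.frequency.value + samples[samples.length - 1];
}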

Speex split audio data - WebAudio - VOIP

I'm running a little app that encodes and decodes an audio array with the Speex codec in JavaScript: https://github.com/dbieber/audiorecorder
with a small array filled with a sine waveform:
for(var i=0;i<16384;i++)
data.push(Math.sin(i/10));
This works. But I want to build a VOIP application and will have more than one array. So if I split my array into 2 parts and encode > decode > merge them, it doesn't sound the same as before.
Take a look at this:
fiddle: http://jsfiddle.net/exh63zqL/
Both buttons should give the same audio output.
How can I get the same output in both ways? Is there a special mode in speex.js for split audio data?
Speex is a lossy codec, so the output is only an approximation of your initial sine wave.
Your sine frequency is about 7 kHz, which is near the upper 8 kHz codec bandwidth and as such even more likely to be altered.
What the codec outputs looks like a comb of Dirac pulses that will sound like your initial sinusoid as heard through a phone, which is certainly different from the original.
See this fiddle where you can listen to what the codec makes of your original sine waves, whether they are split in half or not.
// Generate a continuous sine wave in 2 arrays
var len = 16384;
var buffer1 = [];
var buffer2 = [];
var buffer = [];
for(var i = 0; i < len; i++){
    buffer.push(Math.sin(i/10));
    if(i < len/2)
        buffer1.push(Math.sin(i/10));
    else
        buffer2.push(Math.sin(i/10));
}
// Encode and decode both arrays separately
var en = Codec.encode(buffer1);
var dec1 = Codec.decode(en);
en = Codec.encode(buffer2);
var dec2 = Codec.decode(en);
// Merge the arrays into 1 output array
var merge = [];
for(var i in dec1)
    merge.push(dec1[i]);
for(var i in dec2)
    merge.push(dec2[i]);
// Encode and decode the whole array
en = Codec.encode(buffer);
var dec = Codec.decode(en);
//-----------------
// Below this point is only for playing the different arrays
//-----------------
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
function play(sound)
{
    var audioBuffer = audioCtx.createBuffer(1, sound.length, 44100);
    var bufferData = audioBuffer.getChannelData(0);
    bufferData.set(sound);
    var source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start();
}
$("#o").click(function() { play(dec); });
$("#c1").click(function() { play(dec1); });
$("#c2").click(function() { play(dec2); });
$("#m").click(function() { play(merge); });
If you merge the two half-signal decoder outputs, you will hear an additional click due to the abrupt transition from one signal to the other, sounding basically like a relay commutation.
To avoid that you would have to smooth the values around the merging point of your two buffers.
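One simple way to do that is a short linear fade on each side of the junction; a minimal sketch, where the fade length is an assumption to tune by ear:
// soften the click by fading out the tail of dec1 and fading in the head of dec2
var fadeLen = 64; // samples, roughly 8 ms at 8 kHz
for (var i = 0; i < fadeLen; i++) {
    var g = i / fadeLen;            // ramps 0 -> 1
    dec1[dec1.length - 1 - i] *= g; // tail fades out towards the end
    dec2[i] *= g;                   // head fades in from silence
}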
Note that Speex is a lossy codec. So, by definition, it can't give the same result as the encoded buffer. Besides, it is designed to be a codec for voice, so the 1-2 kHz range will be handled best, as it expects a specific form of signal. In some ways, it can be compared to JPEG technology for raster images.
I've slightly modified your jsfiddle example so you can play with different parameters and compare the results. Just providing a simple sinusoid with an unknown frequency is not a proper way to test a codec. However, in the example you can see the different impact on the initial signal at different frequencies.
buffer1.push(Math.sin(2*Math.PI*i*frequency/sampleRate));
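In context, that line replaces the Math.sin(i/10) generator; a sketch with explicit (assumed) parameters:
var sampleRate = 8000; // Speex narrowband rate, an assumption for this test
var frequency = 440;   // test tone well inside the voice band
var len = 16384;
var buffer1 = [];
for (var i = 0; i < len; i++) {
    buffer1.push(Math.sin(2 * Math.PI * i * frequency / sampleRate));
}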
I think you should build an example with a recorded voice and compare the results in that case; it would be a more proper test.
In general, to understand this in detail you would have to study digital signal processing. I can't even provide a proper link, since it is a whole science and it is mathematically intensive (the only proper book I know of is in Russian). If anyone here with a strong mathematics background can share proper literature for this case, I would appreciate it.
EDIT: as mentioned by Kuroi Neko, there is trouble at the boundaries of the buffers. It also seems that it is impossible to save the decoder state, as mentioned in this post, because the library in use doesn't support it. If you look at the source code, you see that they use a third-party Speex codec and do not provide full access to its features. I think the best approach would be to find a decent Speex library that supports state recovery, similar to this
