WebAudio sounds from wave point - javascript

Suppose that I make a simple canvas drawing app like this:
I now have a series of points. How can I feed them to some of the WebAudio objects (an oscillator, or a sound made from a byte array, or something similar) to actually generate and play a wave out of them (in this case a sine-like wave)? What is the theory behind it?

If you have the data from your graph in an array, y, you can do something like
var buffer = context.createBuffer(1, y.length, context.sampleRate);
buffer.copyToChannel(Float32Array.from(y), 0);
var src = context.createBufferSource();
src.buffer = buffer;
src.connect(context.destination);
src.start();
You may need to set the sample rate in context.createBuffer to something other than context.sampleRate, depending on the data from your graph.
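One detail the snippet above glosses over is that an AudioBuffer expects samples in the -1 to 1 range, while canvas points are in pixel coordinates. A minimal sketch of that conversion (points and canvasHeight are hypothetical names for whatever your drawing app stores); the resulting y array is what gets copied into the AudioBuffer:
var y = new Float32Array(points.length);
for (var i = 0; i < points.length; i++) {
  // map a pixel y coordinate (0 at the top, canvasHeight at the bottom)
  // to an audio sample between 1 and -1
  y[i] = 1 - 2 * (points[i].y / canvasHeight);
}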

Related

Split stereo audio file Into AudioNodes for each channel

How can I split a stereo audio file (I'm currently working with a WAV, but I'm interested in how to do it for MP3 as well, if that's different) into left and right channels to feed into two separate Fast Fourier Transforms (FFT) from the p5.sound.js library?
I've written out what I think I need to be doing below in the code, but I haven't been able to find examples of anyone doing this through Google searches, and all my layman's attempts are turning up nothing.
I'll share what I have below, but in all honesty, it's not much. Everything in question would go in the setup function where I've made a note:
//variable for the p5 sound object
var sound = null;
var playing = false;

function preload(){
  sound = loadSound('assets/leftRight.wav');
}

function setup(){
  createCanvas(windowWidth, windowHeight);
  background(0);
  // I need to do something here to split the audio and return an AudioNode for just
  // the left stereo channel. I have a feeling it's something like
  // feeding audio.getBlob() to a FileReader() and some manipulation and then converting
  // the result of FileReader() to a web audio API source node and feeding that into
  // fft.setInput() like justTheLeftChannel is below, but I'm not understanding how to work
  // with javascript audio methods and createChannelSplitter() and the attempts I've made
  // have just turned up nothing.
  fft = new p5.FFT();
  fft.setInput(justTheLeftChannel);
}

function draw(){
  sound.pan(-1);
  background(0);
  push();
  noFill();
  stroke(255, 0, 0);
  strokeWeight(2);
  beginShape();
  //calculate the waveform from the fft.
  var wave = fft.waveform();
  for (var i = 0; i < wave.length; i++){
    //for each element of the waveform map it to screen
    //coordinates and make a new vertex at the point.
    var x = map(i, 0, wave.length, 0, width);
    var y = map(wave[i], -1, 1, 0, height);
    vertex(x, y);
  }
  endShape();
  pop();
}

function mouseClicked(){
  if (!playing){
    sound.loop();
    playing = true;
  } else {
    sound.stop();
    playing = false;
  }
}
Solution:
I'm not a p5.js expert, but I've worked with it enough that I figured there has to be a way to do this without the whole runaround of blobs / file reading. The docs aren't very helpful for complicated processing, so I dug around a little in the p5.Sound source code and this is what I came up with:
// left channel
sound.setBuffer([sound.buffer.getChannelData(0)]);
// right channel
sound.setBuffer([sound.buffer.getChannelData(1)]);
Here's a working example - clicking the canvas toggles between L/stereo/R audio playback and FFT visuals.
Explanation:
p5.SoundFile has a setBuffer method which can be used to modify the audio content of the sound file object in place. The function signature specifies that it accepts an array of buffer objects, and if that array only has one item, it'll produce a mono source - which is already in the correct format to feed to the FFT! So how do we produce a buffer containing only one channel's data?
Throughout the source code there are examples of individual channel manipulation via sound.buffer.getChannelData(). I was wary of accessing undocumented properties at first, but it turns out that since p5.Sound uses the WebAudio API under the hood, this buffer is really just a plain old WebAudio AudioBuffer, and the getChannelData method is well documented.
The only downside of the approach above is that setBuffer acts directly on the SoundFile, so you end up loading the file again for each channel you want to separate, but I'm sure there's a workaround for that.
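For completeness, a minimal sketch of that two-copy approach (leftSound, rightSound, leftFFT and rightFFT are placeholder names, not from the original post):
var leftSound, rightSound, leftFFT, rightFFT;

function preload(){
  // load the same file twice so each copy can be reduced to one channel
  leftSound = loadSound('assets/leftRight.wav');
  rightSound = loadSound('assets/leftRight.wav');
}

function setup(){
  createCanvas(windowWidth, windowHeight);
  // keep only one channel in each copy
  leftSound.setBuffer([leftSound.buffer.getChannelData(0)]);
  rightSound.setBuffer([rightSound.buffer.getChannelData(1)]);
  leftFFT = new p5.FFT();
  leftFFT.setInput(leftSound);
  rightFFT = new p5.FFT();
  rightFFT.setInput(rightSound);
}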
Happy splitting!

WebAudio - Oscillator setPeriodicWave

I create three different linear chirps using the code found here on SO. With some other code snippets I save those three sounds as separate .wav files. This works so far.
Now I want to play those three sounds at the exact same time. So I thought I could use the WebAudio API and feed three oscillator nodes with the float arrays I got from the code above.
But I can't get even one oscillator node to play its sound.
My code so far (reduced to one oscillator):
var osc = audioCtx.createOscillator();
var sineData = linearChirp(freq, (freq + signalLength), signalLength, audioCtx.sampleRate); // linearChirp from link above
// sine values; add 0 at the front because the docs state that the first value is ignored
var imag = Float32Array.from(sineData.unshift(0));
var real = new Float32Array(imag.length); // cos values
var customWave = audioCtx.createPeriodicWave(real, imag);
osc.setPeriodicWave(customWave);
osc.start();
At the moment I suppose that I do not quite understand the math behind the periodic wave.
The code that plays the three sounds at the same time works (with simple sine values in the oscillator nodes), so I assume that the problem is my periodic wave.
Another question: is there a different way? Maybe like using three MediaElementAudioSourceNode that are linked to my three .wav files. I don't see a way to play them at the exact same time.
The PeriodicWave isn't a "stick a waveform in here and it will be used as a single oscillation" feature - it builds a waveform by specifying the relative strengths of various harmonics. Note that in the code you pointed to, they create a BufferSource node and point its .buffer to the results of linearChirp(). You can do that, too - just use BufferSource nodes to play back the linearChirp() outputs, which (I think?) are just sine waves anyway? (If so, you could just use an oscillator and skip that whole messy "create a buffer" bit.)
If you just want to play back the buffers you've created, use BufferSource. If you want to create complex harmonics, use PeriodicWave. If you've created a single-cycle waveform and you want to play it back as a source waveform, use BufferSource and loop it.
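If the goal is simply to start the three chirps at exactly the same moment, one BufferSource per chirp scheduled against a shared start time is enough. A rough sketch, assuming the chirp sample arrays (here called chirp1, chirp2 and chirp3, hypothetical names) come from linearChirp() as in the question:
var startAt = audioCtx.currentTime + 0.1;   // small lead time so all three can be scheduled
[chirp1, chirp2, chirp3].forEach(function (samples) {
  var buffer = audioCtx.createBuffer(1, samples.length, audioCtx.sampleRate);
  buffer.getChannelData(0).set(samples);
  var src = audioCtx.createBufferSource();
  src.buffer = buffer;
  src.connect(audioCtx.destination);
  src.start(startAt);                       // every source shares the same scheduled time
});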

FFT analysis with JavaScript: How to find a specific frequency?

I have the following problem:
I analyse audio data using JavaScript and an FFT. I can already write the FFT data into an array:
audioCtx = new AudioContext();
analyser = audioCtx.createAnalyser();
source = audioCtx.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(audioCtx.destination);
analyser.fftSize = 64;
var frequencyData = new Uint8Array(analyser.frequencyBinCount);
Every time I want to have new data I call:
analyser.getByteFrequencyData(frequencyData);
The variable "audio" is a mp3 file defined in HTML:
<audio id="audio" src="test.mp3"></audio>
So far so good.
My problem now is that I want to check if the current array "frequencyData" includes a specific frequency. For example: I place a 1000 Hz signal somewhere in the mp3 file and want to get a notification if this part of the mp3 file is currently in the array "frequencyData".
In a first step it would help me to solve the problem when the important part of the mp3 file only contains a 1000 Hz signal. In a second step I would also like to find the part if there is an overlay with music.
frequencyData is an array of amplitudes, and each element of the array represents a range of frequencies. The width of each range is the sample rate divided by the number of FFT points (64 in your case). So if your sample rate is 48000 and your FFT size is 64, each element covers a range of 48000/64 = 750 Hz: frequencyData[0] covers 0-750 Hz, frequencyData[1] covers 750-1500 Hz, and so on. In this example the presence of a 1 kHz tone would show up as a peak in the second bin, frequencyData[1]. Also, with such a small FFT you have probably noticed that the resolution is very coarse; if you want to increase the frequency resolution you'll need to do a larger FFT.
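As a hedged sketch of the detection step, using the variables from the question (the threshold value is an arbitrary starting point you would tune):
var targetHz = 1000;
var binWidth = audioCtx.sampleRate / analyser.fftSize;  // Hz covered by each element
var targetBin = Math.floor(targetHz / binWidth);        // element whose range contains 1000 Hz
var threshold = 200;                                    // 0-255, pick empirically

function checkForTone() {
  analyser.getByteFrequencyData(frequencyData);
  if (frequencyData[targetBin] > threshold) {
    console.log('1 kHz component detected');
  }
  requestAnimationFrame(checkForTone);
}
checkForTone();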

speex split audio data - WebAudio - VOIP

I'm running a little app that encodes and decodes an audio array with the speex codec in JavaScript: https://github.com/dbieber/audiorecorder
with a small array filled with a sine waveform
for(var i=0;i<16384;i++)
data.push(Math.sin(i/10));
This works. But I want to build a VOIP application and have more than one array. So if I split my array into 2 parts and encode > decode > merge them, it doesn't sound the same as before.
Take a look at this:
fiddle: http://jsfiddle.net/exh63zqL/
Both buttons should give the same audio output.
How can I get the same output both ways? Is there a special mode in speex.js for split audio data?
Speex is a lossy codec, so the output is only an approximation of your initial sine wave.
Your sine frequency is about 7 kHz, which is near the codec's upper 8 kHz bandwidth and as such even more likely to be altered.
What the codec outputs looks like a comb of Dirac pulses that will sound like your initial sinusoid as heard through a phone, which is certainly different from the original.
See this fiddle where you can listen to what the codec makes of your original sine waves, whether they are split in half or not.
//Generate a continuous sine wave into 2 arrays
var len = 16384;
var buffer1 = [];
var buffer2 = [];
var buffer = [];
for(var i = 0; i < len; i++){
  buffer.push(Math.sin(i/10));
  if(i < len/2)
    buffer1.push(Math.sin(i/10));
  else
    buffer2.push(Math.sin(i/10));
}
//Encode and decode both arrays separately
var en = Codec.encode(buffer1);
var dec1 = Codec.decode(en);
var en = Codec.encode(buffer2);
var dec2 = Codec.decode(en);
//Merge the arrays into 1 output array
var merge = [];
for(var i in dec1)
  merge.push(dec1[i]);
for(var i in dec2)
  merge.push(dec2[i]);
//Encode and decode the whole array
var en = Codec.encode(buffer);
var dec = Codec.decode(en);
//-----------------
//Below is only for playing the different arrays
//-----------------
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
function play(sound){
  var audioBuffer = audioCtx.createBuffer(1, sound.length, 44100);
  var bufferData = audioBuffer.getChannelData(0);
  bufferData.set(sound);
  var source = audioCtx.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(audioCtx.destination);
  source.start();
}
$("#o").click(function() { play(dec); });
$("#c1").click(function() { play(dec1); });
$("#c2").click(function() { play(dec2); });
$("#m").click(function() { play(merge); });
If you merge the two half signal decoder outputs, you will hear an additional click due to the abrupt transition from one signal to the other, sounding basically like a relay commutation.
To avoid that you would have to smooth the values around the merging point of your two buffers.
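One possible way to do that smoothing (only a sketch; the overlap length is an arbitrary choice, and the result is still only an approximation) is a short crossfade between the end of the first decoded half and the start of the second:
var fadeLen = 64;                       // number of samples to blend across the joint
var merged = [];
for (var i = 0; i < dec1.length - fadeLen; i++)
  merged.push(dec1[i]);
for (var i = 0; i < fadeLen; i++) {
  var t = i / fadeLen;                  // ramps 0 -> 1 across the joint
  merged.push(dec1[dec1.length - fadeLen + i] * (1 - t) + dec2[i] * t);
}
for (var i = fadeLen; i < dec2.length; i++)
  merged.push(dec2[i]);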
Note that Speex is a lossy codec, so by definition it can't reproduce the original buffer exactly. Besides, it is designed as a codec for voice, so the 1-2 kHz range will be handled most efficiently, as it expects a specific form of signal. In some ways it can be compared to JPEG technology for raster images.
I've slightly modified your jsfiddle example so you can play with different parameters and compare results. Just providing a simple sinusoid with an unknown frequency is not a proper way to test a codec. However, in the example you can see the different impact on the initial signal at different frequencies.
buffer1.push(Math.sin(2*Math.PI*i*frequency/sampleRate));
I think you should build an example with a recorded voice and compare the results in that case; it would be a more appropriate test.
In general, to understand this in detail you would have to study digital signal processing. I can't even provide a proper link, since it is a whole science and it is mathematically intensive (the only proper book I know is in Russian). If anyone here with a strong mathematics background can share proper literature for this case, I would appreciate it.
EDIT: as mentioned by Kuroi Neko, there is trouble at the boundaries of the buffers. It also seems impossible to save the decoder state, as mentioned in this post, because the library in use doesn't support it. If you look at the source code you can see that they use a third-party speex codec and do not provide full access to its features. I think the best approach would be to find a decent speex library that supports state recovery, similar to this one.

How to draw on an HTML5 Canvas, pixel-by-pixel

Suppose that I have a 900x900 HTML5 Canvas element.
I have a function called computeRow that accepts, as a parameter, the number of a row on the grid and returns an array of 900 numbers. Each number is a value between 0 and 200. There is an array called colors that contains strings like rgb(0,20,20), for example.
Basically, what I'm saying is that I have a function that tells, pixel by pixel, what color each pixel in a given row of the canvas is supposed to be. Running this function many times, I can compute a color for every pixel on the canvas.
The process of running computeRow 900 times takes about 0.5 seconds.
However, the drawing of the image takes much longer than that.
What I've done is I've written a function called drawRow that takes an array of 900 numbers as the input and draws them on the canvas. drawRow takes much longer to run than computeRow! How can I fix this?
drawRow is dead simple. It looks like this:
function drawRow(rowNumber, result /* array */) {
  var plot, context, columnNumber, color;
  plot = document.getElementById('plot');
  context = plot.getContext('2d');
  // Iterate over the results for each column in the row, coloring a single pixel on
  // the canvas the correct color for each one.
  for(columnNumber = 0; columnNumber < width; columnNumber++) {
    color = colors[result[columnNumber]];
    context.fillStyle = color;
    context.fillRect(columnNumber, rowNumber, 1, 1);
  }
}
I'm not sure exactly what you are trying to do, so I apologize if I am wrong.
If you are trying to write a color to each pixel on the canvas, this is how you would do it:
var ctx = document.getElementById('plot').getContext('2d');
var imgdata = ctx.getImageData(0, 0, 640, 480);
var imgdatalen = imgdata.data.length;
for(var i = 0; i < imgdatalen/4; i++){  //iterate over every pixel in the canvas
  imgdata.data[4*i] = 255;     // RED (0-255)
  imgdata.data[4*i+1] = 0;     // GREEN (0-255)
  imgdata.data[4*i+2] = 0;     // BLUE (0-255)
  imgdata.data[4*i+3] = 255;   // ALPHA (0-255)
}
ctx.putImageData(imgdata, 0, 0);
This is a lot faster than drawing a rectangle for every pixel. The only thing you would need to do is separate your colors into rgba() values.
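Since the colors array in the question holds strings like rgb(0,20,20), one way to get those numeric components is to parse them once, before the pixel loop (just an illustrative sketch; the parsed name and the regex are mine, not from the question):
var parsed = colors.map(function (str) {
  var m = str.match(/rgb\((\d+),\s*(\d+),\s*(\d+)\)/);
  return [ +m[1], +m[2], +m[3] ];       // [red, green, blue] as numbers
});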
If you read the color values as strings from an array for each pixel, it does not really matter what technique you use, as the bottleneck would be that part right there.
For each pixel the cost is split on (roughly) these steps:
Look up array (really a node/linked list in JavaScript)
Get string
Pass string to fillStyle
Parse string (internally) into color value
Ready to draw a single pixel
These are very costly operations performance-wise. To make this more efficient you need to convert that color array into something other than an array of strings, ahead of the drawing operations.
You can do this several ways:
If the array comes from a server, try to format it as a blob / typed array instead before sending it. This way you can copy the content of the returned array almost as-is to the canvas' pixel buffer.
Use a web worker to parse the array and pass it back as a transferable object, which you then copy into the canvas' buffer. This can be copied directly to the canvas - or do it the other way around: transfer the pixel buffer to the worker, fill it there and return it.
Sort the array by color values and update the canvas in color groups. This way you can use fillStyle, or calculate the color into a Uint32 value which you copy to the canvas using a Uint32 buffer view. This does not work well if the colors are very spread out, but works OK if the colors represent a small palette.
If you're stuck with the format of the colors, then the second option is what I would recommend, primarily depending on the size. It makes your code asynchronous, so this is an aspect you need to deal with as well (i.e. callbacks when operations are done).
You can of course just parse the array on the same thread and find a way to camouflage the delay a bit for the user in case it is noticeable (900x900 shouldn't be that big of a deal even for a slower computer).
If you do convert the array, convert it into unsigned 32-bit values and store the result in a typed array. This way you can iterate over the canvas' pixel buffer using Uint32 values, which is much faster than a byte-per-byte approach.
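A rough sketch of that last point, assuming the rgb(...) strings have already been parsed into an array paletteRGB of [r, g, b] triples (a hypothetical name) and that the machine is little-endian, which is what writing whole pixels through a Uint32 view of ImageData implies:
var palette32 = paletteRGB.map(function (c) {
  // pack alpha, blue, green, red into one 32-bit value (little-endian byte order)
  return ((255 << 24) | (c[2] << 16) | (c[1] << 8) | c[0]) >>> 0;
});

var ctx = document.getElementById('plot').getContext('2d');
var imgData = ctx.createImageData(900, 900);
var pixels = new Uint32Array(imgData.data.buffer);

for (var row = 0; row < 900; row++) {
  var result = computeRow(row);                         // 900 palette indices, as in the question
  for (var col = 0; col < 900; col++) {
    pixels[row * 900 + col] = palette32[result[col]];
  }
}
ctx.putImageData(imgData, 0, 0);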
fillRect is meant to be used for just that - filling an area with a single color, not pixel by pixel. If you do it pixel by pixel, it is bound to be slower, as you are CPU bound. You can check this by observing the CPU load in these cases. The code will become more performant if:
A separate image is created with the required image data filled in. You can use a worker thread to fill this image in the background. An example of using worker threads is available in the blog post at http://gpupowered.org/node/11
Then, blit the image into the 2d context you want using context.drawImage(image, dx, dy).
