I create three different linear chirps using code I found here on SO, and with some other snippets I save those three sounds as separate .wav files. This works so far.
Now I want to play those three sounds at exactly the same time. My idea was to use the Web Audio API and feed three oscillator nodes with the float arrays I got from the code above.
But I can't get even one oscillator node to play its sound.
My code so far (shrunk to one oscillator):
var osc = audioCtx.createOscillator();
var sineData = linearChirp(freq, (freq + signalLength), signalLength, audioCtx.sampleRate); // linearChirp from link above
// sine values; add 0 at the front because the docs state that the first value is ignored
var imag = Float32Array.from(sineData.unshift(0));
var real = new Float32Array(imag.length); // cos values
var customWave = audioCtx.createPeriodicWave(real, imag);
osc.setPeriodicWave(customWave);
osc.start();
At the moment I suspect that I don't quite understand the math behind the periodic wave.
The code that plays the three sounds at the same time works (with simple sine values in the oscillator nodes), so I assume the problem is my periodic wave.
Another question: is there a different way? Maybe using three MediaElementAudioSourceNodes linked to my three .wav files? I don't see a way to play them at exactly the same time.
The PeriodicWave isn't a "stick a waveform in here and it will be used as a single oscillation" feature - it builds a waveform by specifying the relative strengths of the various harmonics. Note that in the code you pointed to, they create a BufferSource node and point its .buffer at the result of linearChirp(). You can do that, too - just use BufferSource nodes to play back the linearChirp() outputs, which (I think?) are just sine waves anyway? (If so, you could just use an oscillator and skip that whole messy "create a buffer" bit.)
If you just want to play back the buffers you've created, use BufferSource. If you want to create complex harmonics, use PeriodicWave. If you've created a single-cycle waveform and you want to play it back as a source waveform, use BufferSource and loop it.
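To play several buffers at exactly the same time, a minimal sketch (assuming an AudioContext named audioCtx and the float arrays returned by linearChirp(); the helper name playTogether is made up for illustration) is to wrap each array in an AudioBuffer and schedule all the sources against the same clock value:

    function playTogether(audioCtx, floatArrays) {
        // schedule slightly in the future so every source shares one start time
        var startAt = audioCtx.currentTime + 0.1;
        floatArrays.forEach(function (samples) {
            var buffer = audioCtx.createBuffer(1, samples.length, audioCtx.sampleRate);
            buffer.getChannelData(0).set(samples);
            var source = audioCtx.createBufferSource();
            source.buffer = buffer;
            source.connect(audioCtx.destination);
            source.start(startAt); // same value for all three => synchronized playback
        });
    }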
I found a JavaScript snippet that captures the current microphone input just to send it out again. You can see it here:
https://codepen.io/MyXoToD/pen/bdb1b834b15aaa4b4fcc8c7b50c23a6f?editors=1010 (only works over https).
I was wondering how I can generate the "negative" waves of the captured sound, just like "noise cancellation" works: the script detects the current noise around me, and whenever there is a wave going up I want to generate a wave going down so they cancel each other out. Is there a way to do this in JavaScript?
So this outputs the current sound recorded:
var input = audio_context.createMediaStreamSource(stream);
volume = audio_context.createGain();
volume.gain.value = 0.8;
input.connect(volume);
volume.connect(audio_context.destination);
The question is how to produce the negative of this sound and output that instead.
You're already streaming your signal through a Web Audio chain. What you need to do is insert a processing node between the input and output nodes (after or before the gain node, it doesn't matter); in current browsers that's an AudioWorkletNode. The script for the worklet is a very simple sign inversion.
// load the processor script, then splice the node into the existing chain
audio_context.audioWorklet.addModule('negate.js').then(function () {
    var negationNode = new AudioWorkletNode(audio_context, 'negate');
    input.connect(negationNode);
    negationNode.connect(volume);
});
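A minimal sketch of what negate.js could look like (the processor name "negate" is an assumption chosen to match the code above):

    // negate.js - inverts the sign of every sample passing through
    class NegateProcessor extends AudioWorkletProcessor {
        process(inputs, outputs) {
            var input = inputs[0], output = outputs[0];
            for (var channel = 0; channel < input.length; channel++) {
                for (var i = 0; i < input[channel].length; i++) {
                    output[channel][i] = -input[channel][i];
                }
            }
            return true; // keep the processor alive
        }
    }
    registerProcessor('negate', NegateProcessor);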
I have been looking around for how to create an audio equalizer using the Web Audio API: http://webaudio.github.io/web-audio-api/
I found a lot of threads about creating a visualizer, but that is of course not what I want to do. I simply want to be able to alter the sound using frequency sliders. I found that the BiquadFilterNode should do the job, but I can't get a good result. The sound changes consistently whenever I move a slider, but it just lowers the quality of the sound instead of altering the frequencies.
I first load a sound:
Audio.prototype.init = function(callback){
    var $this = this;
    this.gainScale = d3.scale.linear().domain([0,1]).range([-40,40]);
    this.context = new AudioContext();
    this.loadSounds(function(){
        $this.loadSound(0);
        $this.play();
        callback.call();
    });
};
Everything works well, the sound plays when ready.
I have 10 sliders for frequencies [32,64,125,250,500,1000,2000,4000,8000,16000].
For each slider I create a filter and I connect it to the source, as described here: Creating a 10-Band Equalizer Using Web Audio API:
Audio.prototype.createFilter = function(index,frequency){
    if(this.filters == undefined) this.filters = [];
    var filter = this.context.createBiquadFilter();
    filter.type = 2;
    filter.frequency.value = frequency;
    // Connect source to filter, filter to destination.
    this.source.connect(filter);
    filter.connect(this.context.destination);
    this.filters[index] = filter;
};
Finally, when I change the value of a slider I update the filter:
Audio.prototype.updateFilter = function(index,newVal){
    this.filters[index].frequency.gain = this.gainScale(newVal);
};
NB: my gainScale function takes as input a value in [0,1] and returns a value in [-40,40], to set the gain between -40 and 40 for each frequency.
Would appreciate any help !
Multiple things here.
1) You shouldn't use bandpass filters in parallel to implement an equalizer. Among other issues, biquad filtering changes the phase of different parts of the signal, so the different bands will end up in different phases, and you'll get some potentially quite bad effects on your sound when it recombines.
2) The approach you want is to have a lowshelf filter on the bottom end, a highshelf filter on the top end, and any number of peaking filters in the middle. These should be connected in series (i.e. the input signal connects to one filter, which connects to another filter, which connects to another filter, and so on), and only the final filter should be connected to the audiocontext.destination. The Q values should be tuned (see below), and the gain on each filter determines the boost/cut. (For a flat response, all filter gains should be set to zero.)
3) filter.type is an enumerated type that you should set as a string, not as a number. "lowshelf", "highshelf" and "peaking" are the ones you're looking for here.
You can see an example of a simple three-band equalizer in my DJ app - https://github.com/cwilso/wubwubwub/blob/MixTrack/js/tracks.js#L189-L207 sets it up. To modify this into a multiband equalizer, you'll need to tweak the Q value of each filter to get the bands to not overlap too much (it's not bad if they do overlap, but your bands will be more precise if you tune them). You can use http://googlechrome.github.io/web-audio-samples/samples/audio/frequency-response.html to examine the frequency response for a given Q and filter type.
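As a rough sketch of that series chain (the frequencies come from the question; the Q of 1 and the helper name createEqualizer are placeholders, not taken from the linked app):

    function createEqualizer(context, source) {
        var frequencies = [32, 64, 125, 250, 500, 1000, 2000, 4000, 8000, 16000];
        var filters = frequencies.map(function (freq, i) {
            var filter = context.createBiquadFilter();
            filter.type = (i === 0) ? "lowshelf"
                        : (i === frequencies.length - 1) ? "highshelf"
                        : "peaking";
            filter.frequency.value = freq;
            if (filter.type === "peaking") filter.Q.value = 1; // tune so bands don't overlap too much
            filter.gain.value = 0; // 0 dB on every filter = flat response
            return filter;
        });
        // series connection: source -> f0 -> f1 -> ... -> f9 -> destination
        filters.reduce(function (prev, next) {
            prev.connect(next);
            return next;
        }, source).connect(context.destination);
        return filters; // a slider then sets filters[i].gain.value (in dB)
    }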
One issue is that you want your sliders to control the gain of the filter at a given frequency, not the filter frequency itself. According to the spec, the gain of a bandpass filter is not controllable, which is a bit limiting. Fortunately you can put a gain node at the end of each filter.
var filter = this.context.createBiquadFilter();
filter.type = "bandpass";
filter.frequency.value = frequency;
var gain = this.context.createGain();
// Connect source to filter, filter to the gain, gain to destination.
this.source.connect(filter);
filter.connect(gain);
gain.connect(this.context.destination);
this.filters[index] = filter;
this.gains[index] = gain;
Next you'll need to connect your slider up to the gain parameter of the gain node. I don't really know Web Audio so I'll leave that to you. The last thing is that you need to specify the Q of the filter. I get the impression from your list of frequencies that you're trying to create octave-wide filters, so the Q factor is probably going to be around 1.414. You're really going to need to do a bit of research if you want to get this right.
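Putting that together with the question's updateFilter (a sketch: note that GainNode.gain is a linear amplitude while gainScale returns dB, so it needs converting):

    Audio.prototype.updateFilter = function(index, newVal){
        var dB = this.gainScale(newVal); // value in [-40, 40]
        this.gains[index].gain.value = Math.pow(10, dB / 20); // dB -> linear amplitude
    };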
I'm wondering if there's a way to take existing images and "stack" them to create a single asset in JavaScript.
http://imgur.com/a/ajkBh
The above image album shows what I'd like to do.
Basically, for the game I'm making, I want to procedurally generate enemy NPCs and the like, drawing from a pool of different body parts. Each potential body part would have stats and a spritesheet attached to it, so when a character is randomly generated, I want to stack all of the necessary images together into a single asset that I can then use.
Is there any way to do this?
Canvas is a very basic drawing API with the ability to draw a few basic shapes, strokes and fills. Other than filling with the background color and/or clearing the whole canvas, there's no way to animate scenes with "sprites" or complete objects sitting on top of each other using only the basic canvas API. Copying images in is possible, but then you need to clear and redraw them every single frame, which is a lot of code overhead if you want them to animate.
You should look into http://createjs.com or a similar "scene graph" type framework - something that sits on top of the canvas and lets you easily load up sprite sheets and move them around. It does the drawing, clearing, rotation, animation etc. of the canvas for you (basically making it a bit like Flash).
In terms of purely stacking or drawing on the canvas: yes, you can grab an image and copy it directly onto the canvas using the context2d.drawImage method, but this by itself is probably not going to achieve the effect you want.
You can build up your animation out of existing parts; I think the main issue is organizing the base artwork so that the drawing of one part fits with another.
Let's say:
you want an idle (row 1), a walk (row 2) and a run (row 3) animation;
each row has a constant number of frames, say 5;
your parts are: legs, body, arms, head.
Then you have to build the image yourself by stacking those parts:
function buildAnimation(legs, body, arms, head) {
    var resImg = document.createElement('canvas');
    resImg.width = legs.width;
    resImg.height = legs.height;
    var resCtx = resImg.getContext('2d');
    resCtx.drawImage(legs, 0, 0);
    resCtx.drawImage(body, 0, 0);
    resCtx.drawImage(arms, 0, 0);
    resCtx.drawImage(head, 0, 0);
    return resImg;
}
Then you can feed your game framework with this image, which will be used for an animation.
The drawback of this method is that you have to draw all animations of all parts at the same places every time.
Issues:
1) for the head, for instance, you might not want to animate it.
2) you might want different heights for different characters.
3) it's a lot of work!!
So you might settle on conventions for where the parts should be drawn, and have fewer parts to prepare in an image at the cost of a more complex way to build them.
Short example: the file names of the image parts end with their height, so you can retrieve it easily (bodyMonster48.png, bodyHead12.png, ...).
Writing everything out would be too much work here, so just a short example: say we have animWidth and animHeight, the size of each frame, and five frames in each of the 3 anims. Now we have just one head that we want to copy everywhere:
function buildAnimation(animWidth, animHeight, legs, body, arms, head) {
    var resImg = document.createElement('canvas');
    resImg.width = legs.width;
    resImg.height = legs.height;
    var resCtx = resImg.getContext('2d');
    resCtx.drawImage(legs, 0, 0);
    resCtx.drawImage(body, 0, 0);
    resCtx.drawImage(arms, 0, 0);
    // copy the head into all frames of all anims
    for (var animLine = 0; animLine < 3; animLine++) { // iterate over idle, walk, run
        for (var animFrame = 0; animFrame < 5; animFrame++) { // iterate over the frames of one anim
            resCtx.drawImage(head, animFrame * animWidth, animLine * animHeight);
        }
    }
    return resImg;
}
To be able to build any combination with variable heights, you'll have to parametrize everything carefully, use file naming and positioning conventions, and you'll surely need a whole helper class so as not to get lost in all the combinations.
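A short usage sketch for the function above (the file names, the 64x64 frame size and the loadImage helper are illustrative assumptions):

    // load the part spritesheets, then build the combined sheet
    function loadImage(src) {
        return new Promise(function (resolve) {
            var img = new Image();
            img.onload = function () { resolve(img); };
            img.src = src;
        });
    }

    Promise.all([
        loadImage('legs.png'),
        loadImage('body.png'),
        loadImage('arms.png'),
        loadImage('head.png')
    ]).then(function (parts) {
        // the returned canvas can be used anywhere an image is accepted
        var sheet = buildAnimation(64, 64, parts[0], parts[1], parts[2], parts[3]);
        document.body.appendChild(sheet);
    });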
For a project, I'm retrieving a live audio stream via WebSockets from a Java server. On the server, I'm processing the samples as 16-bit/8000 Hz/mono, in the form of 8-bit signed byte values (two bytes making up one sample). In the browser, however, the lowest supported sample rate is 22050 Hz. So my idea was to "simply" upsample the existing 8000 Hz to 32000 Hz, which is supported and seemed to me like an easy calculation.
So far, I've tried linear upsampling and cosine interpolation, but neither worked. In addition to sounding really distorted, the first one also added some clicking noises. I might also have trouble with the Web Audio API in Chrome, but at least the sound plays and is barely recognizable as what it should be, so I guess it's not a codec or endianness problem.
Here's the complete code that gets executed when a binary packet with sound data is received. I'm creating new buffers and buffer sources all the time for the sake of simplicity (yeah, not good for performance). data is an ArrayBuffer. First I convert the samples to float, then I upsample.
//endianness-aware buffer view
var bufferView = new DataView(data),
    //the audio buffer to set for output
    buffer = _audioContext.createBuffer(1, 640, 32000),
    //reference to the underlying buffer array
    buf = buffer.getChannelData(0),
    floatBuffer8000 = new Float32Array(160);
//16 bit => float
for (var i = 0, j = null; i < 160; i++) {
    j = bufferView.getInt16(i * 2, false);
    floatBuffer8000[i] = (j > 0) ? j / 32767 : j / -32767;
}
//convert 8000 => 32000
var point1, point2, point3, point4, mu = 0.2, mu2 = (1 - Math.cos(mu * Math.PI)) / 2;
for (var i = 0, j = 0; i < 160; i++) {
    //index into the dst buffer
    j = i * 4;
    //the points to interpolate between
    point1 = floatBuffer8000[i];
    point2 = (i < 159) ? floatBuffer8000[i + 1] : point1;
    point3 = (i < 158) ? floatBuffer8000[i + 2] : point1;
    point4 = (i < 157) ? floatBuffer8000[i + 3] : point1;
    //interpolate
    point2 = (point1 * (1 - mu2) + point2 * mu2);
    point3 = (point2 * (1 - mu2) + point3 * mu2);
    point4 = (point3 * (1 - mu2) + point4 * mu2);
    //put the data into the buffer
    buf[j] = point1;
    buf[j + 1] = point2;
    buf[j + 2] = point3;
    buf[j + 3] = point4;
}
//playback
var node = _audioContext.createBufferSource();
node.buffer = buffer;
node.connect(_audioContext.destination);
node.noteOn(_audioContext.currentTime);
Finally found a solution for this. The conversion from 16 bit to float is wrong; it just needs to be
floatBuffer8000[i] = j / 32767.0;
Also, feeding the API with lots of small buffers doesn't work well, so you need to buffer up some samples and play them together.
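One common way to do that buffering (a sketch, with _nextStart as an assumed module-level variable; start() is the modern name for noteOn()) is to schedule each incoming chunk right at the end of the previous one instead of at currentTime:

    //schedule chunks back-to-back so many small packets play as one stream
    var _nextStart = 0;
    function enqueue(buffer) {
        var node = _audioContext.createBufferSource();
        node.buffer = buffer;
        node.connect(_audioContext.destination);
        var now = _audioContext.currentTime;
        if (_nextStart < now) _nextStart = now + 0.05; //re-sync after an underrun
        node.start(_nextStart);
        _nextStart += buffer.duration; //the next chunk starts where this one ends
    }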
Is there a possibility to render a visualization of an audio file?
Maybe with SoundManager2 / Canvas / HTML5 Audio?
Do you know some techniques?
I want to create something like this:
You have a ton of samples and tutorials here: http://www.html5rocks.com/en/tutorials/#webaudio
For the moment it works in the latest Chrome and the latest Firefox (Opera?).
Demos: http://www.chromeexperiments.com/tag/audio/
To do it now, for all visitors of a web site, you can check SoundManagerV2.js, which goes through a Flash "proxy" to access the audio data: http://www.schillmania.com/projects/soundmanager2/demo/api/ (they're already working on the HTML5 audio engine, to be released as soon as the major browsers implement it).
It's up to you to draw, in a canvas, the 3 different kinds of audio data: waveform, equalizer and peak.
soundManager.defaultOptions.whileplaying = function() { // AUDIO analyzer !!!
    $document.trigger({ // DISPATCH ALL DATA RELATIVE TO THE AUDIO STREAM
        type: 'musicLoader:whileplaying',
        sound: {
            position: this.position, // in milliseconds
            duration: this.duration,
            waveformDataLeft: this.waveformData.left, // array of 256 floating-point (three decimal place) values from -1 to 1
            waveformDataRight: this.waveformData.right,
            eqDataLeft: this.eqData.left, // two arrays of 256 floating-point (three decimal place) values from 0 to 1,
            eqDataRight: this.eqData.right, // the result of an FFT on the waveform data; can be used to draw a spectrum (frequency range)
            peakDataLeft: this.peakData.left, // floating-point values from 0 to 1, indicating the "peak" (volume) level
            peakDataRight: this.peakData.right
        }
    });
};
With HTML5 you can get:

// assuming an existing AudioContext (audioContext) and a connected source node
var analyser = audioContext.createAnalyser();
source.connect(analyser);
analyser.connect(audioContext.destination);

var freqByteData = new Uint8Array(analyser.frequencyBinCount);
var timeByteData = new Uint8Array(analyser.frequencyBinCount);

function onaudioprocess() {
    analyser.getByteFrequencyData(freqByteData);
    analyser.getByteTimeDomainData(timeByteData);
    /* draw your canvas */
}

Time to work! ;)
Run the samples through an FFT, then display the energy within each range of frequencies as the height of the graph at the corresponding point. You'll normally want the frequency ranges to go from around 20 Hz at the left to roughly half the sampling rate at the right (or 20 kHz if the sampling rate exceeds 40 kHz).
I'm not so sure about doing this in JavaScript, though. Don't get me wrong: JavaScript is perfectly capable of implementing an FFT - but I'm not at all sure about doing it in real time. OTOH, for user viewing you can get by with around 5-10 updates per second, which is likely to be a considerably easier target to reach. For example, 20 ms of samples updated every 200 ms might be halfway reasonable to hope for, though I certainly can't guarantee that you'll be able to keep up with that.
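As a rough sketch of that display step (assuming the AnalyserNode setup from the earlier snippet, which does the FFT for you, plus a <canvas> element with the id "viz" - both are assumptions, not part of the original answer):

    var canvas = document.getElementById('viz');
    var canvasCtx = canvas.getContext('2d');

    function draw() {
        analyser.getByteFrequencyData(freqByteData); //energy per frequency bin
        canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
        var barWidth = canvas.width / freqByteData.length;
        for (var i = 0; i < freqByteData.length; i++) {
            //bin energy => bar height; bins run from 0 Hz up to half the sample rate
            var h = (freqByteData[i] / 255) * canvas.height;
            canvasCtx.fillRect(i * barWidth, canvas.height - h, barWidth, h);
        }
        requestAnimationFrame(draw); //far more than the 5-10 updates/s mentioned above
    }
    draw();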
http://ajaxian.com/archives/amazing-audio-sampling-in-javascript-with-firefox
Check out the source code to see how they're visualizing the audio.
This isn't possible yet except by fetching the audio as binary data and unpacking the MP3 (not JavaScript's forte), or maybe by using Java or Flash to extract the bits of information you need (which seems possible but also like more headache than I personally would want to take on).
But you might be interested in Dave Humphrey's audio experiments, which include some cool visualization stuff. He's doing this by making modifications to the browser source code and recompiling it, so it's obviously not a realistic solution for you - but those experiments could lead to new features being added to the <audio> element in the future.
For this you would need to do a Fourier transform (look for FFT), which will be slow in JavaScript and not possible in real time at present.
If you really want to do this in the browser, I would suggest doing it in Java/Silverlight, since they deliver the fastest number-crunching speed in the browser.