I'm trying to change the frequency of my lowpass filter gradually, but instead of happening gradually, the change happens instantly.
This code should start at one frequency, decrease exponentially to 200 Hz at 1 second in, then stop at 2 seconds in. Instead, the frequency stays the same until 1 second, where it instantly jumps to the lower value.
var context = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = context.createOscillator();
var now = context.currentTime;
//lowpass node
var lowPass = context.createBiquadFilter();
lowPass.connect(context.destination);
lowPass.frequency.value = 500;
lowPass.Q.value = 0.5;
lowPass.frequency.exponentialRampToValueAtTime(200, now + 1);
oscillator.connect(lowPass);
oscillator.start(now);
oscillator.stop(now + 2);
Edit: I just realized it actually does work in Chrome. But I mainly use Firefox; can I just not use Web Audio yet?
The AudioParam interface has two modes:
A. Immediate
B. Automated
In mode A you simply set the value property of the parameter.
In mode B, if you want to ramp from 500 to 200 you first have
to use an automation event to set the starting value, e.g.:
frequency.setValueAtTime(500, 0)
According to the spec, a startTime of zero applies the value immediately.
What you are trying to do is intermingle both modes, but the automation
events do not take a value set directly via the value property into account.
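A minimal sketch of the fix, assuming the `lowPass` and `context` variables from the question (the helper name `scheduleSweep` is mine):

```javascript
// Anchor the automation timeline before ramping. A plain `param.value = 500`
// is not an automation event, so some implementations (Firefox at the time)
// ignore it and the ramp appears to jump instead of glide.
function scheduleSweep(param, now) {
  param.setValueAtTime(500, now);                   // automation starting point
  param.exponentialRampToValueAtTime(200, now + 1); // glide down to 200 Hz over 1 s
}

// In the question's code, instead of `lowPass.frequency.value = 500;`:
// scheduleSweep(lowPass.frequency, context.currentTime);
```

exponentialRampToValueAtTime ramps from the previous automation event, which is why the setValueAtTime anchor is required.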
Related
I'm trying to add the following sound effects to some audio files, then grab their audio buffers and convert them to .mp3 format:
Fade-out the first track
Fade in the following tracks
A background track (and make it background by giving it a small gain node)
Another track that will serve as the more audible of both merged tracks
Fade out the previous one and fade in the first track again
I observed that the effects provided by the AudioParam interface, as well as the gain property of the GainNode interface, are applied on the way to the context's destination rather than to the buffer itself. Is there a technique to attach the AudioParam values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning on the way to the destination (meaning I must connect the source nodes and output them via OfflineAudioContexts and startRendering)? I tried that approach previously and was told on my last thread that I only needed one BaseAudioContext, and that it didn't have to be an OfflineAudioContext. I think that to apply different effects to different files I need several contexts, so I'm stuck in a dilemma: I have various AudioParams and GainNodes, but simply calling start on the sources will lose their effect.
The following snippets demonstrate the effects I'm referring to, while the full code can be found at https://jsfiddle.net/ka830Lqq/3/
var beginNodeGain = overallContext.createGain(); // Create a gain node
beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // make the volume high 3 secs before the end
beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // reduce the volume to the minimum for the duration of expRampTime - setValTime i.e 3
// connect the AudioBufferSourceNode to the gainNode and the gainNode to the destination
begin.connect(beginNodeGain);
Another snippet goes thus
function handleBg (bgBuff) {
  var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate), // using a new context here so we can utilize its individual gains
      bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
      bgGainNode = bgContext.createGain(),
      smoothTrans = new Float32Array(3);
  smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in, but is actually between 5 and 6
  smoothTrans[1] = 1.0;
  smoothTrans[2] = 0.4; // make it flow in the background
  bgGainNode.gain.setValueAtTime(0, 0); // currentTime here is 6.something-something
  bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at `param 2` and last for `param 3` seconds. bgBuff.duration
  for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
    for (var j = 0; j < finalBuff[0].length; j++) {
      var data = bgBuff.getChannelData(channel),
          loopBuff = bgAudBuff.getChannelData(channel),
          oldBuff = data[j] != void(0) ? data[j] : data[j - data.length];
      loopBuff[j] = oldBuff;
    }
  }
  // instead of piping them to the output speakers, merge them
  mixArr.push(bgAudBuff);
  gottenBgBuff.then(handleBody());
}
Based on the web audio analyser API, I am creating an audio animation that draws images based on the real time frequency spectrum (like the classical bar graphics that move to the frequency of the sound, except that it is not bars that are drawn but something more complex).
It works fine, my only issue is that I am not able to stop the image at a precise time.
When I want it to stop at, say, 5 seconds, it sometimes stops at 5.000021, or 5.000013, or 5.0000098, and so on,
and the problem is that the frequency spectrum (and therefore my image based on that spectrum) is not the same at 5.000021, 5.000013, or 5.0000098.
This means that when the user wants to see the image corresponding to 5 s, he sees a slightly different image every time, whereas I would like there to be only one image corresponding to 5 s (often the image is only slightly different on each try, but sometimes the differences are quite large).
Here are extracts of my code:
var ctx = new AudioContext();
var soundmp3 = document.getElementById('soundmp3');
soundmp3.src = URL.createObjectURL(this.files[0]);
var audioSrc = ctx.createMediaElementSource(soundmp3);
var analyser = ctx.createAnalyser();
analyser.fftSize = 2048;
analyser.smoothingTimeConstant = 0.95;
audioSrc.connect(analyser);
audioSrc.connect(ctx.destination);
var frequencyData = new Uint8Array(analyser.frequencyBinCount);
function renderFrame() {
  if (framespaused) return;
  drawoneframe();
  requestAnimationFrame(renderFrame);
};
function drawoneframe() {
  analyser.getByteFrequencyData(frequencyData);
  // drawing of my image ...
};
function gotomp3(timevalue) {
  soundmp3.pause();
  newtime = timevalue;
  backtime = newtime - 0.2000;
  soundmp3.currentTime = backtime;
  soundmp3.play();
  function boucle() {
    if (soundmp3.currentTime >= timevalue) {
      if (Math.abs(soundmp3.currentTime - newtime) <= 0.0001) {
        drawoneframe();
        soundmp3.pause();
        soundmp3.currentTime = timeatgetfrequency;
        return;
      } else {
        soundmp3.pause();
        soundmp3.currentTime = backtime;
        soundmp3.play();
      };
    }
    setTimeout(boucle, 1);
  };
  boucle();
};
document.getElementById("fieldtime").onkeydown = function (event) {
  if (event.which == 13 || event.keyCode == 13) {
    gotomp3(document.getElementById("fieldtime").value);
    return false;
  }
  return true;
};
Code explanation: if the user enters a value in the "fieldtime" field (= newtime) and hits Enter, I first go 0.2 s back, start playing, and stop when currentTime is very close to newtime. (I have to go back first because if I jump directly to newtime and stop immediately afterwards, analyser.getByteFrequencyData does not yet have the values at newtime.) With boucle() I manage to stop at very precise times: if newtime = 5, the time when drawoneframe() is called is 5.000xx. But the problem remains that every time the user enters 5 as newtime, the image shown is slightly different.
So my question: has someone an idea how I could achieve that every time the user enters the same time as newtime, the image will be exactly the same ?
I am also not quite sure at which times soundmp3.currentTime is updated. With a sample rate of 44.1 kHz, I guess it is something like every 0.0227 ms, but does this mean it is updated exactly at 0 ms, 0.0227 ms, 0.0454 ms, ..., or just approximately?
I thought about smoothing the analyser results, so that there are less variations for small time variations. Setting analyser.smoothingTimeConstant to a value close to 1 is not sufficient. But maybe there is another way to do it.
I do not need high precision results, I just would like that if a user wants to see the image corresponding to x seconds, then each time he enters x seconds, he sees exactly the same image.
Thank you very much for any help.
Mireille
I wrote the following code to play a note with a contoured lowpass filter:
var ac = new AudioContext;
var master = ac.createGain();
master.connect(ac.destination);
master.gain.value = 0.7;
var filter = ac.createBiquadFilter();
filter.connect(master);
filter.type = 'lowpass';
filter.Q.value = 2;
var osc = ac.createOscillator();
osc.connect(filter);
osc.type = 'square';
osc.frequency.value = 55;
var now = ac.currentTime;
osc.start(now);
//osc.stop(now+0.2);
filter.frequency.setValueAtTime(0, now);
filter.frequency.linearRampToValueAtTime(440, now+0.02);
filter.frequency.linearRampToValueAtTime(0, now+0.12);
The note sounds as expected (the filter opens quickly, then completely closes a bit more slowly) but at the very end of it I can hear a click. The lower the note, the louder the click.
I already tried uncommenting the commented line, as well as adding a contour to the master, but nothing worked.
Edit: by "adding a contour to the master" I mean that I tried ramping the master gain down to 0 at exactly the same time as the filter reaches 0. That did not work either.
How can I prevent the click at the end of the note?
While, intuitively, lowering a filter to zero should cut off all signal, filters don't "cut off" all signal above the specified frequency. So you should expect some energy to be in the signal even with the filter you've created.
However, filters DO attenuate signal in direct proportion to how far it is from the cutoff frequency. So it makes perfect sense that the lower your signal is, the louder the pop when you turn it off: lower frequencies are closer to your cutoff of zero and are therefore less attenuated by the filter.
You can solve this problem by doing a linear ramp of the GAIN of the signal immediately before you schedule the signal to stop altogether. You can ramp down in the last millisecond or so in order to avoid a pop.
Could you describe what "adding a contour to the master" means?
In your example, I would ramp the master gain down to zero just before the filter frequency reaches 0.
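As a rough sketch of that suggestion, using the variable names from the question (the helper name, and the 1 ms fade length, are my own assumptions):

```javascript
// Fade the master gain to zero just before the note stops, so the residual
// low-frequency energy is not chopped off mid-cycle (which causes the click).
function scheduleRelease(gainParam, stopTime, fadeSeconds) {
  fadeSeconds = fadeSeconds || 0.001; // ~1 ms is usually enough to avoid a pop
  gainParam.setValueAtTime(gainParam.value, stopTime - fadeSeconds);
  gainParam.linearRampToValueAtTime(0, stopTime);
}

// In the question's code, if the oscillator is stopped at now + 0.2:
// osc.stop(now + 0.2);
// scheduleRelease(master.gain, now + 0.2);
```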
Short version
I need to divide an audio signal by another one (amplitude-wise). How could I accomplish this in the Web Audio API, without using ScriptProcessorNode? (with ScriptProcessorNode the task is trivial, but it is completely unusable for production due to the inherent performance issues)
Long version
Consider two audio sources, two OscillatorNodes for example, oscA and oscB:
var oscA = audioCtx.createOscillator();
var oscB = audioCtx.createOscillator();
Now, consider that these oscillators are LFOs, both with low (i.e. <20Hz) frequencies, and that their signals are used to control a single destination AudioParam, for example, the gain of a GainNode. Through various routing setups, we can define mathematical operations between these two signals.
Addition
If oscA and oscB are both directly connected to the destination AudioParam, their outputs are added together:
var dest = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(dest.gain);
Subtraction
If the output of oscB is first routed through another GainNode with a gain of -1, which is then connected to the destination AudioParam, then the output of oscB is effectively subtracted from that of oscA, because we are effectively doing an oscA + -oscB op. Using this trick we can subtract one signal from another one:
var dest = audioCtx.createGain();
var inverter = audioCtx.createGain();
oscA.connect(dest.gain);
oscB.connect(inverter);
inverter.gain.value = -1;
inverter.connect(dest.gain);
Multiplication
Similarly, if the output of oscA is connected to another GainNode, and the output of oscB is connected to the gain AudioParam of that GainNode, then oscB is multiplying the signal of oscA:
var dest = audioCtx.createGain();
var multiplier = audioCtx.createGain();
oscA.connect(multiplier);
oscB.connect(multiplier.gain);
multiplier.connect(dest.gain);
Division (?)
Now, I want the output of oscB to divide the output of oscA. How do I do this, without using ScriptProcessorNode?
Edit
My earlier, absolutely ridiculous attempts at solving this problem were:
Using a PannerNode and driving its positionZ param. This yielded a result that decreased as signal B (oscB) increased, but the values were completely off (e.g. it yielded 12/1 = 8.5 and 12/2 = 4.2). The offset can be compensated for with a GainNode whose gain is set to 12 / 8.48528099060058593750 (an approximation), but it only supports values >= 1
Using an AnalyserNode to rapidly sample the audio signal and then use that value (LOL)
Edit 2
The reason why the ScriptProcessorNode is essentially useless for applications more complex than a tech demo is that:
it executes audio processing on the main thread (!), and heavy UI work will introduce audio glitches
a single, dead simple ScriptProcessorNode will take 5% CPU power on a modern device, as it performs processing with JavaScript and requires data to be passed between the audio thread (or rendering thread) and the main thread (or UI thread)
It should be noted that ScriptProcessorNode is deprecated.
If you need A/B, then what you really need is 1/B, the reciprocal signal, and you can use a WaveShaperNode to produce it. This node takes an array of corresponding values; here, inversion means that -1 maps to -1, -0.5 maps to -2, and so on.
In addition, be aware of division by zero: you have to handle it. In the following code I just take the next value after zero and double it.
function makeInverseCurve( ) {
  var n_samples = 44100,
      curve = new Float32Array(n_samples),
      x;
  for (var i = 0; i < n_samples; i++) {
    x = i * 2 / n_samples - 1;
    // at x == 0 (i.e. i * 2 == n_samples), use twice the next value,
    // 2 * (1 / (2 / n_samples)) = n_samples, instead of dividing by zero
    curve[i] = (i * 2 == n_samples) ? n_samples : 1 / x;
  }
  return curve;
};
A working fiddle is here. If you remove .connect(distortion) from the audio chain, you see a simple sine wave. The visualization code comes from sonoport.
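To sanity-check that the curve really behaves like 1/x, here is a small standalone sketch; `shapeSample` approximates the WaveShaperNode lookup with nearest-index rounding (the real node interpolates linearly between curve points), and both helper names are mine:

```javascript
// Reciprocal curve as in the answer above, parameterised by length.
function makeInverseCurve(n_samples) {
  var curve = new Float32Array(n_samples);
  for (var i = 0; i < n_samples; i++) {
    var x = i * 2 / n_samples - 1;
    // At x == 0, use twice the next value (n_samples) instead of dividing by zero.
    curve[i] = (i * 2 === n_samples) ? n_samples : 1 / x;
  }
  return curve;
}

// Roughly what the WaveShaper does: map an input sample in [-1, 1]
// to an index into the curve and read the shaped value.
function shapeSample(curve, x) {
  var i = Math.round((x + 1) / 2 * (curve.length - 1));
  return curve[i];
}

var curve = makeInverseCurve(44100);
// shapeSample(curve, 0.5) is close to 1 / 0.5 = 2
// shapeSample(curve, -0.25) is close to 1 / -0.25 = -4
```

To perform the actual division A/B in the graph, route oscB through a WaveShaperNode holding this curve and connect the shaper's output to the gain AudioParam of the multiplier GainNode from the multiplication example.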
To demonstrate this (I'm working on a much more compressed Fiddle), open this in Chrome, move Jay-Z using your arrow keys, and catch about 4 - 5 (sometimes more!) cakes.
You will notice that there is a massive cupcake on the left side of the screen now.
I update the cakes' positions in my handleTick function, and add new cakes on a time interval. Here are both of those:
/*This function must exist after the Stage is initialized so I can keep popping cakes onto the canvas*/
function make_cake() {
  var path = queue.getItem("cake").src;
  var cake = new createjs.Bitmap(path);
  var current_cakeWidth = cake.image.width;
  var current_cakeHeight = cake.image.height;
  var desired_cakeWidth = 20;
  var desired_cakeHeight = 20;
  cake.x = 0;
  cake.y = Math.floor((Math.random() * (stage.canvas.height - 35)) + 1); // random y between 1 and canvas height - 35
  cake.scaleX = desired_cakeWidth / current_cakeWidth;
  cake.scaleY = desired_cakeHeight / current_cakeHeight;
  cake.rotation = 0;
  cake_tray.push(cake);
  stage.addChild(cake);
}
And the setInterval part:
setInterval(function () {
  if (game_on == true) {
    if (cake_tray.length < 5) {
      make_cake();
    }
  }
  else {
    ; // nothing to do while the game is off
  }
}, 500);
stage.update is also called from handleTick.
Here is the entire JS file
Thanks for looking into this. Note once again that this only happens in Chrome; I have not seen it happen in Firefox. I am not concerned with other browsers at this time.
Instead of using the source of your item, it might make more sense to use the actual loaded image. If you pass the source URL, the image may have a width/height of 0 at first, resulting in scaling issues.
// This will give you an actual image reference
var path = queue.getResult("cake");
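Sketching how make_cake might use the loaded image instead, with a guard for a not-yet-ready image (the `cakeScale` helper is my own name, not part of CreateJS):

```javascript
// Compute scaleX/scaleY so an image of arbitrary size is drawn at a
// desired size; return null if the image has no dimensions yet.
function cakeScale(imageWidth, imageHeight, desiredWidth, desiredHeight) {
  if (!imageWidth || !imageHeight) return null; // image not loaded yet
  return {
    scaleX: desiredWidth / imageWidth,
    scaleY: desiredHeight / imageHeight
  };
}

// In make_cake, using the actual image from the LoadQueue:
// var image = queue.getResult("cake");
// var cake = new createjs.Bitmap(image);
// var s = cakeScale(image.width, image.height, 20, 20);
// if (s) { cake.scaleX = s.scaleX; cake.scaleY = s.scaleY; }
```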