I just noticed that it seems impossible to change the gain.value of a GainNode with setValueAtTime() or setValueCurveAtTime() when no oscillator is connected, or when the connected oscillator has not started yet.
setValueAtTime after the oscillator starts
For instance, in this case setValueAtTime works as expected:
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start();
gain.gain.setValueAtTime(0, context.currentTime + 1);
The oscillator starts and the gain is 1 for 1 second. Then gain.gain.value will move to 0.
setValueAtTime before the oscillator starts
However, if we set the gain with setValueAtTime before the oscillator starts:
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start(context.currentTime + 1);
gain.gain.setValueAtTime(0, context.currentTime);
The gain.gain.value will stay at 1.
Set gain.gain.value without setValueAtTime
What is strange is that this behaviour does not occur if we set the gain directly:
var context = new AudioContext();
var gain = context.createGain();
gain.connect(context.destination);
var osc = context.createOscillator();
osc.frequency.value = 300;
osc.connect(gain);
osc.start(context.currentTime + 1);
gain.gain.value = 0;
The gain value will always stay at 0.
If you're using Chrome, then this is probably a bug in Chrome. Chrome actually returns the computed value in the getter, but if a node doesn't have an input but is still connected to the destination, the AudioParam automations aren't run. They should be, and the values can be inspected with the .value getter.
AudioParam.value isn't a computed value - i.e., it won't tell you what the gain currently IS, just what AudioParam.value was last set to (cf. https://webaudio.github.io/web-audio-api/#widl-AudioParam-value). If you want to know what the current value of the AudioParam truly is, you'd need to route it to an audio node and collect the data (e.g. via a ScriptProcessorNode). In your first example, I don't think gain.gain.value should go to 0.
The actual value of an AudioParam at any given point in time can be affected not only by the scheduler and the .value, but also by nodes connect()ed to the AudioParam; it would be expensive to compute those values constantly and port them back to the AudioParam.
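For what it's worth, here is a minimal sketch of that inspection idea, using an AnalyserNode as the tap instead of a ScriptProcessorNode (the variable names are mine, not from the examples above): feed the GainNode a known signal and read its output samples, which reflect the automation actually being applied.
var context = new AudioContext();
var gain = context.createGain();
var osc = context.createOscillator();
var analyser = context.createAnalyser();
var samples = new Float32Array(analyser.fftSize);
// Tap the GainNode's output: the rendered samples reflect the automation,
// even when gain.gain.value does not.
osc.connect(gain);
gain.connect(analyser);
analyser.connect(context.destination);
osc.start();
gain.gain.setValueAtTime(0, context.currentTime + 1);
setInterval(function () {
  analyser.getFloatTimeDomainData(samples);
  console.log(samples[0]); // oscillates before the automation kicks in, 0 after
}, 500);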
Related
See the code below. How I understand things:
beat is a square wave oscillating between -1 and 1.
Connecting beat to multiplier.gain adds the square wave of beat to the default gain of 1. The result is a gain that oscillates between 0 and 2.
As tone is connected to multiplier, I expect to hear a tone of 440Hz for two seconds, then a pause for two seconds, then the tone again, and so on.
However, where I expect the gain to be 0, I still hear a tone, only muted. What am I doing wrong?
I tested with Chrome 74 and Firefox 66, both on Windows 10.
Code:
<!doctype html>
<meta charset=utf-8>
<script>
var context = new window.AudioContext();
var tone = context.createOscillator();
var beat = context.createOscillator();
beat.frequency.value = 0.25;
beat.type = "square";
var multiplier = context.createGain();
tone.connect(multiplier);
beat.connect(multiplier.gain);
multiplier.connect(context.destination);
tone.start();
beat.start();
</script>
<button onclick="context.resume()">Play</button>
The problem is that the 'square' type doesn't really oscillate between -1 and 1. The range is more or less from -0.848 to 0.848. Setting the GainNode's gain AudioParam to this value should work.
multiplier.gain.value = 0.848;
To see the actual output of an oscillator you could, for example, use Canopy. It can run Web Audio code and then visualize the results.
If you execute the following snippet, for example, it will show you the corresponding waveform.
var osc = new OscillatorNode(context);
osc.type = "square";
osc.connect(context.destination);
osc.start();
I hope this helps.
I'm trying to gradually change the frequency amount of my lowpass filter, but instead of happening gradually, it happens instantly.
This code should start at a frequency, exponentially decrease to 200 at 1 second in, then stop at 2 seconds in. Instead it stays the same until 1 second in, where it instantly jumps to the lower frequency.
var context = new (window.AudioContext || window.webkitAudioContext)();
var oscillator = context.createOscillator();
var now = context.currentTime;
//lowpass node
var lowPass = context.createBiquadFilter();
lowPass.connect(context.destination);
lowPass.frequency.value = 500;
lowPass.Q.value = 0.5;
lowPass.frequency.exponentialRampToValueAtTime(200, now + 1);
oscillator.connect(lowPass);
oscillator.start(now);
oscillator.stop(now + 2);
Edit: I just realized it does actually work in Chrome. But I mainly use Firefox; can I just not use Web Audio yet?
The AudioParam interface has two modes:
A. Immediate
B. Automated
In mode A you simply set the value property of the parameter.
In mode B, if you want to 'ramp' from 500 to 200, you first have to use an automation event to set the starting value of 500, e.g.:
frequency.setValueAtTime(500, 0)
A startTime parameter of zero applies the value immediately, according to the spec.
What you are trying to do is intermingle both modes, but the automation timeline does not take the directly-set value into account.
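Applied to the snippet in the question (a minimal sketch reusing its context and lowPass variables):
var now = context.currentTime;
// Anchor the start value with an automation event first...
lowPass.frequency.setValueAtTime(500, now);
// ...so the ramp has an event to ramp from, and runs gradually.
lowPass.frequency.exponentialRampToValueAtTime(200, now + 1);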
I'm trying to add the following sound effects to some audio files, then grab their audio buffers and convert them to .mp3 format:
Fade-out the first track
Fade in the following tracks
A background track (and make it background by giving it a small gain node)
Another track that will serve as the more audible of both merged tracks
Fade out the previous one and fade in the first track again
I observed that the effects provided by the AudioParam interface, as well as those from the GainNode interface, are applied at the context's destination instead of to the buffer itself. Is there a technique to bind the AudioParam instance values (or the gain property) to the buffers, so that when I merge them into one final buffer they still retain those effects? Or do those effects only have meaning at the destination, so that I must connect the source nodes and output them via OfflineAudioContexts and startRendering? I tried that method previously and was told in my immediately preceding thread that I only needed one BaseAudioContext and that it didn't have to be an OfflineAudioContext. I think that to apply different effects to different files I need several contexts, so I'm stuck in a dilemma: I have the various AudioParams and GainNodes set up, but simply calling start on the sources will lose their effect.
The following snippets demonstrate the effects I'm referring to; the full code can be found at https://jsfiddle.net/ka830Lqq/3/
var beginNodeGain = overallContext.createGain(); // Create a gain node
beginNodeGain.gain.setValueAtTime(1.0, buffer.duration - 3); // make the volume high 3 secs before the end
beginNodeGain.gain.exponentialRampToValueAtTime(0.01, buffer.duration); // reduce the volume to the minimum for the duration of expRampTime - setValTime i.e 3
// connect the AudioBufferSourceNode to the gainNode and the gainNode to the destination
begin.connect(beginNodeGain);
Another snippet goes like this:
function handleBg (bgBuff) {
var bgContext = new OfflineAudioContext(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate), // using a new context here so we can utilize its individual gains
bgAudBuff = bgContext.createBuffer(bgBuff.numberOfChannels, finalBuff[0].length, bgBuff.sampleRate),
bgGainNode = bgContext.createGain(),
smoothTrans = new Float32Array(3);
smoothTrans[0] = overallContext.currentTime; // should be 0, to usher it in but is actually between 5 and 6
smoothTrans[1] = 1.0;
smoothTrans[2] = 0.4; // make it flow in the background
bgGainNode.gain.setValueAtTime(0, 0); //currentTime here is 6.something-something
bgGainNode.gain.setValueCurveAtTime(smoothTrans, 0, finalBuff.pop().duration); // start at `param 2` and last for `param 3` seconds. bgBuff.duration
for (var channel = 0; channel < bgBuff.numberOfChannels; channel++) {
for (var j = 0; j < finalBuff[0].length; j++) {
var data = bgBuff.getChannelData(channel),
loopBuff = bgAudBuff.getChannelData(channel),
oldBuff = data[j] != void(0) ? data[j] : data[j - data.length];
loopBuff[j] = oldBuff;
}
}
// instead of piping them to the output speakers, merge them
mixArr.push(bgAudBuff);
gottenBgBuff.then(handleBody());
}
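For what it's worth, gain automation does get rendered into the buffer when the graph runs inside an OfflineAudioContext. A minimal sketch (the helper name is mine, not from the fiddle), reusing the 3-second fade-out from the first snippet:
function renderFadeOut(buffer) { // hypothetical helper, assumes a decoded AudioBuffer
  var offline = new OfflineAudioContext(buffer.numberOfChannels, buffer.length, buffer.sampleRate);
  var source = offline.createBufferSource();
  var fade = offline.createGain();
  source.buffer = buffer;
  fade.gain.setValueAtTime(1.0, buffer.duration - 3);
  fade.gain.exponentialRampToValueAtTime(0.01, buffer.duration);
  source.connect(fade);
  fade.connect(offline.destination);
  source.start();
  // Resolves with an AudioBuffer that has the fade rendered into its samples.
  return offline.startRendering();
}
Each track could be rendered this way with its own OfflineAudioContext, and the resulting AudioBuffers merged afterwards.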
I would like to try making a synth using JavaScript, but I can't find any basic examples on how to do so.
What I have figured out from my research is that it appears to be possible, and that you should use a Canvas Pixel Array rather than normal ECMA arrays.
I've also found info in MDN Audio, and I have seen audio elements used for continuous playback by web radio players before, although I couldn't figure out how.
My goal is to make something that allows me to synthesize continuous sine waves and play them using my keyboard, without using pre-made samples.
EDIT: One of the comments below pointed me in the right direction. I'm currently working on a solution, but if you would like to post one as well, feel free.
Here is a basic example from which anyone should be able to figure out how to play sine waves with their keyboard:
<script type="text/javascript">
//WARNING: VERY LOUD. TURN DOWN YOUR SPEAKERS BEFORE TESTING
// create web audio api context
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
// create Oscillator node
var oscillator = audioCtx.createOscillator();
oscillator.type = 'sine';
oscillator.frequency.value = 750; // value in hertz
oscillator.connect(audioCtx.destination);
oscillator.start();
//uncomment for fun
// setInterval(changeFreq, 100);
//choose a random interval from a list of consonant ratios
var intervals = [1.0, 0.5, 0.3333, 1.5, 1.3333, 2.0, 2.25];
function changeFreq() {
var intervalIndex = ~~(Math.random() * intervals.length);
var noteFreq = oscillator.frequency.value * intervals[intervalIndex];
//because this is random, make an effort to keep it in comfortable frequency range.
if(noteFreq > 1600)
noteFreq *= 0.5;
else if(noteFreq < 250)
noteFreq *= 2;
oscillator.frequency.value = noteFreq;
}
</script>
<body>
<button onclick="changeFreq()">Change Places!</button>
</body>
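For the key-press part, here is a minimal sketch (the key-to-frequency map is my own choice; it reuses audioCtx from the snippet above): start an oscillator on keydown and stop it on keyup.
var keyToFreq = { "KeyA": 261.63, "KeyS": 293.66, "KeyD": 329.63 }; // C4, D4, E4
var playing = {};
document.addEventListener("keydown", function (e) {
  // Ignore unmapped keys and auto-repeat while a note is held.
  if (!keyToFreq[e.code] || playing[e.code]) return;
  var osc = audioCtx.createOscillator();
  osc.frequency.value = keyToFreq[e.code];
  osc.connect(audioCtx.destination);
  osc.start();
  playing[e.code] = osc;
});
document.addEventListener("keyup", function (e) {
  if (!playing[e.code]) return;
  playing[e.code].stop();
  delete playing[e.code];
});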
Is it possible to find out when a WebAudio oscillator is silent, and then call its stop method?
My reason for asking is that, if you don't call stop on an oscillator, it hangs around in memory indefinitely. But oscillators don't have a length or duration, so there's no way to find out whether the sound they're producing has finished so that you can call stop when it's done. So I wonder if there's a way to test whether the oscillator is producing any audible sound, or is silent?
You can put an analyser between the oscillator and its output:
var size = 2048;
var analyser = audioCtx.createAnalyser();
var data = new Float32Array(size);
analyser.fftSize = size;
theOscillator.connect(analyser);
analyser.connect(theOutput);
var silenceChecker = setInterval(function() {
analyser.getFloatTimeDomainData(data);
for (var i = 0; i < size; ++i) {
if (data[i] !== 0) return;
}
// It is silent.
clearInterval(silenceChecker);
theOscillator.stop();
theOscillator.disconnect();
analyser.disconnect();
}, Math.floor(size / audioCtx.sampleRate * 1000));
Note that this is a dumb algorithm that can only detect pure silence, not whether the oscillator is so quiet that it is effectively silent. For that you'd need to run a significantly more complex algorithm, and probably not in the main thread.
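For what it's worth, a rough sketch of a slightly more tolerant variant, replacing the loop body above: treat a frame whose RMS falls below a small threshold as silence. The threshold value is an arbitrary assumption, and this is still a naive heuristic.
analyser.getFloatTimeDomainData(data);
var sum = 0;
for (var i = 0; i < size; ++i) sum += data[i] * data[i];
var rms = Math.sqrt(sum / size);
if (rms >= 1e-4) return; // arbitrary "effectively silent" threshold
clearInterval(silenceChecker);
theOscillator.stop();
theOscillator.disconnect();
analyser.disconnect();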
As you already said, oscillators don't have a length or duration, so you (or something in your code) has to tell them to stop.
You can set a timeout and stop the oscillator after x seconds, or bind the stop action to a button in the interface or to a key press/release.
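For example, a minimal sketch of the scheduled-stop approach (the 2-second figure is arbitrary, and audioCtx is reused from the snippet above):
var osc = audioCtx.createOscillator();
osc.connect(audioCtx.destination);
osc.start();
osc.stop(audioCtx.currentTime + 2); // schedule the stop two seconds from now
osc.onended = function () { osc.disconnect(); }; // clean up once it has actually stopped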