I'm creating an HTML5 app for testing and I'm working with the Web Audio API. To generate sound from the keyboard I'm doing something like this:
keyboard.keyDown(function (note, frequency) {
    var oscillator = context.createOscillator(),
        gainNode = context.createGain(); // createGainNode() is the deprecated name

    oscillator.type = 'sawtooth'; // numeric type constants like 2 are deprecated; use a string
    oscillator.frequency.value = frequency;
    gainNode.gain.value = 0.3;

    oscillator.connect(gainNode);
    gainNode.connect(context.destination);
    oscillator.start(0); // start() replaces the deprecated noteOn()
    nodes.push(oscillator);
});
Now my question: I tried to find examples on Google with no success. Apart from the oscillator settings, what other parameters can I use to make the sound resemble a piano or an electronic instrument, and how do I pass them?
I'm assuming you are fairly new to synthesis. Before trying synthesis algorithms in code, I'd recommend playing with some of the software synthesizers that are available - VST or otherwise. This will give you a handle on the kind of parameters you want to be introducing into your algorithm. http://www.soundonsound.com/sos/allsynthsecrets.htm is an index for a series of really good synthesis tutorials. (Start at the bottom - part 1!)
Once you are ready to start experimenting in code, a great place to start would be to introduce an envelope to change the volume or pitch of the sound over time (changing a parameter over time like this is called 'modulation'). This video may be of interest: http://www.youtube.com/watch?v=A6pp6OMU5r8
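For instance, a simple attack/release envelope on a gain node might look like the sketch below (assuming an existing AudioContext named context; the times and levels are arbitrary values to tune by ear):
// Sketch: an amplitude envelope shaped with AudioParam ramps.
function playNoteWithEnvelope(frequency) {
    var oscillator = context.createOscillator();
    var gainNode = context.createGain();
    var now = context.currentTime;

    oscillator.type = 'sawtooth';
    oscillator.frequency.value = frequency;

    // Attack: ramp from silence to full level over 20 ms.
    gainNode.gain.setValueAtTime(0, now);
    gainNode.gain.linearRampToValueAtTime(0.3, now + 0.02);
    // Release: fade back to silence over 500 ms.
    gainNode.gain.linearRampToValueAtTime(0, now + 0.52);

    oscillator.connect(gainNode);
    gainNode.connect(context.destination);
    oscillator.start(now);
    oscillator.stop(now + 0.52);
}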
Bear in mind that almost all acoustic instruments are difficult to convincingly synthesize algorithmically, and by far the easiest way to get close to a piano is to use samples of real piano notes.
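For instance, loading and playing one sampled note could look roughly like this (a sketch: 'piano-c4.wav' is a placeholder filename, and context is an assumed existing AudioContext):
// Sketch: play a sampled note with an AudioBufferSourceNode.
fetch('piano-c4.wav')
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (data) { return context.decodeAudioData(data); })
    .then(function (buffer) {
        var source = context.createBufferSource();
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0);
    });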
I'm trying to use the Web Audio API in JavaScript for the first time.
For a personal project I'm trying to control the volume, but I'm having some difficulties.
I'm using this git project : https://github.com/kelvinau/circular-audio-wave
In this project I added this function, which is called from play():
changeVolume() {
    const volume = this.context.createGain();
    volume.gain.value = 0.1;
    volume.connect(this.context.destination);
    this.sourceNode.connect(volume);
}
When I set the gain to 0 it doesn't mute the sound, but when I set it to 3 it works and the sound is louder.
Do you know why I can increase the volume but can't lower it?
From what you are describing, it sounds as if there is still a direct connection from the sourceNode to the destination of the context. That unattenuated signal is summed with the output of your gain node, which is why a gain of 0 doesn't mute anything while a gain of 3 just adds a louder copy on top.
It should work if you remove this line from the original example:
this.sourceNode.connect(this.context.destination);
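With that connection gone, the only path to the speakers runs through the gain node, so a gain of 0 actually mutes the output. A minimal sketch of the corrected routing, based on your changeVolume() but creating the gain node only once:
changeVolume(value) {
    // Create the gain node once and route everything through it.
    if (!this.volume) {
        this.volume = this.context.createGain();
        this.sourceNode.connect(this.volume);
        this.volume.connect(this.context.destination);
    }
    this.volume.gain.value = value; // 0 mutes, 1 is unity, >1 boosts
}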
Suppose you use the Web Audio API to play a pure tone:
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam, so set .value)
src.connect(ctx.destination);
src.start();
But, later on you decide you want to stop the sound:
src.stop();
From this point on, src is now completely useless; if you try to start it again, you get:
src.start()
VM564:1 Uncaught DOMException: Failed to execute 'start' on 'AudioScheduledSourceNode': cannot call start more than once.
at <anonymous>:1:5
If you were making, say, a little online keyboard, you're constantly turning notes on and off. It seems really clunky to remove the old object from the audio node graph, create a brand-new object, connect() it into the graph, and then discard it later, when it would be simpler to just turn it on and off as needed.
Is there some important reason the Web Audio API does things like this? Or is there some cleaner way of restarting an audio source?
Use connect() and disconnect(). You can then change the values of any AudioNode to change the sound.
(The button is because AudioContext requires a user action to run in Snippet.)
play = () => {
d.addEventListener('mouseover',()=>src.connect(ctx.destination));
d.addEventListener('mouseout',()=>src.disconnect(ctx.destination));
ctx = new AudioContext();
src = ctx.createOscillator();
src.frequency.value = 261.63; // play middle C (frequency is an AudioParam)
src.start();
}
div {
height:32px;
width:32px;
background-color:red
}
div:hover {
background-color:green
}
<button onclick='play();this.disabled=true;'>play</button>
<div id='d'></div>
This is exactly how the web audio api works. Sound generator nodes like oscillator nodes and audio buffer source nodes are intended to be used once. Every time you want to play your oscillator, you have to create it and set it up, just like you said. I know it seems like a hassle, but you can abstract it into a play() method that handles those details for you, so you don't have to think about it every time you play an oscillator. Also, don't worry about the performance implications of creating so many nodes. The web audio api is intended to be used this way.
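For example, such a helper might look like this minimal sketch (assuming an existing AudioContext named ctx; playNote is just an illustrative name):
function playNote(frequency, duration) {
    // Create, configure, connect, and schedule a throwaway oscillator.
    var osc = ctx.createOscillator();
    osc.frequency.value = frequency;
    osc.connect(ctx.destination);
    osc.start();
    osc.stop(ctx.currentTime + duration); // the node is discarded after this
}

playNote(261.63, 0.5); // middle C for half a second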
If you just want to make music on the internet, and you're not as interested in learning the ins and outs of the web audio api, you might be interested in using a library I wrote that makes things like this easier: https://github.com/rserota/wad
I am working on a 12-voice polyphonic synthesizer with 2 oscillators per voice.
I now never stop the oscillators; I disconnect them instead, which you can schedule with setTimeout. For the delay, take the longer of the two release phases from the amp envelope for this set of oscillators, subtract AudioContext.currentTime (a property, not a method), and multiply by 1000 (setTimeout works in milliseconds, Web Audio in seconds).
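A sketch of that timing calculation (audioCtx, the two oscillator arguments, and releaseEnd are placeholders for your own voice state):
// Disconnect a voice's oscillators once its longest release has finished.
// releaseEnd is the absolute context time (in seconds) at which the
// longer of the two amp-envelope releases ends.
function scheduleDisconnect(osc1, osc2, releaseEnd) {
    var delayMs = (releaseEnd - audioCtx.currentTime) * 1000; // seconds to ms
    setTimeout(function () {
        osc1.disconnect();
        osc2.disconnect();
    }, delayMs);
}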
This pen uses the ToneJS library to play pitches on the computer keyboard. However, it can only play one note at a time. How can I code this to play multiple notes at once?
The code:
var keyToPitch = { "z":"C3", "s":"C#3", "x":"D3", "d":"D#3", "c":"E3", "v":"F3", "g":"F#3", "b":"G3", "h":"G#3", "n":"A3", "j":"A#3", "m":"B3", ",":"C4" }
var synth = new Tone.Synth()
synth.oscillator.type = "sawtooth"
synth.toMaster()
window.addEventListener('keydown', this.onkeydown)
window.addEventListener('keyup', this.onkeyup)
function onkeydown(e){
synth.triggerAttack(keyToPitch[e.key], Tone.context.currentTime)
}
function onkeyup(e){
synth.triggerRelease()
}
Oscillators in ToneJS are audio sources, and Master is the output that plays all the inputs connected to it. So to play multiple overlapping sounds, you just create multiple oscillators and connect them all to Master.
For the kind of demo you're linking to, a typical thing to do might be to make a fixed number (5, say) of oscillators, and rotate through them in turn when you want to trigger a sound:
var voices = 5, synths = []
for (var i = 0; i < voices; i++) {
  var s = new Tone.Synth()
  s.toMaster()
  synths.push(s)
}
var synthIndex = 0
function startSound(pitch){
  synthIndex = (synthIndex + 1) % voices
  synths[synthIndex].triggerAttack(pitch, Tone.context.currentTime)
}
Or similar. In principle you could make a separate oscillator for every pitch, but this would probably hurt performance in a less trivial demo.
So, I just found out that you can record sound using JavaScript. That's just awesome!
I instantly created a new project to try something of my own. However, as soon as I opened the source code of the example script, I found there are no explanatory comments at all.
I started googling and found a long and interesting article about AudioContext that doesn't seem to be aware of recording at all (it only mentions remixing sounds), and an MDN article that contains all the information while successfully hiding the one piece I'm after.
I'm also aware of existing frameworks that deal with this (somehow, maybe). But if I just wanted a sound recorder I'd download one; I'm really curious how the thing works.
Now, not only am I unfamiliar with the coding part, I'm also curious how the whole thing works: do I get intensity at a specific time, much like on an oscilloscope?
Or can I already get a spectral analysis of the sample?
So, just to avoid any mistakes: could anyone please explain the simplest and most straightforward way to get the input data using the above-mentioned API, and ideally provide code with explanatory comments?
If you just want to use the mic input as a source with the Web Audio API, the following code worked for me. It is based on: https://gist.github.com/jarlg/250decbbc50ce091f79e
// Legacy getUserMedia, with vendor prefixes for older browsers.
navigator.getUserMedia = navigator.getUserMedia
    || navigator.webkitGetUserMedia
    || navigator.mozGetUserMedia;
navigator.getUserMedia({video:false,audio:true},callback,console.log);

function callback(stream){
    ctx = new AudioContext();
    // Wrap the microphone stream in a source node.
    mic = ctx.createMediaStreamSource(stream);
    // An analyser node exposes time-domain and frequency data.
    spe = ctx.createAnalyser();
    spe.fftSize = 256;
    bufferLength = spe.frequencyBinCount; // fftSize / 2 bins
    dataArray = new Uint8Array(bufferLength);
    spe.getByteTimeDomainData(dataArray); // fills dataArray with waveform samples
    mic.connect(spe);
    spe.connect(ctx.destination); // optional: also play the mic through the speakers
    draw(); // draw() is defined elsewhere in the original gist
}
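Note that navigator.getUserMedia is deprecated; in current browsers the same setup uses the promise-based navigator.mediaDevices.getUserMedia. A minimal sketch of the modern equivalent:
navigator.mediaDevices.getUserMedia({ video: false, audio: true })
    .then(function (stream) {
        var ctx = new AudioContext();
        var mic = ctx.createMediaStreamSource(stream);
        var analyser = ctx.createAnalyser();
        analyser.fftSize = 256;
        mic.connect(analyser);
        // Pull time-domain samples whenever you want to draw them.
        var data = new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteTimeDomainData(data);
    })
    .catch(console.log);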
I make a new oscillator for each note I play.
function playSound(freq, duration) {
    var attack = 5,        // attack time in milliseconds
        decay = duration,  // total note length in milliseconds
        gain = context.createGain(),
        osc = context.createOscillator();

    gain.connect(context.destination);
    // Ramp up over the attack, then back down to silence by `decay`.
    gain.gain.setValueAtTime(0, context.currentTime);
    gain.gain.linearRampToValueAtTime(0.1, context.currentTime + attack / 1000);
    gain.gain.linearRampToValueAtTime(0, context.currentTime + decay / 1000);

    osc.frequency.value = freq;
    osc.type = "sine";
    osc.connect(gain);
    osc.start(0);

    // Once the note has faded out, stop and detach the nodes.
    setTimeout(function() {
        osc.stop(0);
        osc.disconnect(gain);
        gain.disconnect(context.destination);
    }, decay)
}
The melody is played in a for loop, where playSound is called. When I click the pause button, I want to silence the melody and pause the for loop so that if I click the play button again, the melody resumes. How do I access all the current oscillators to disconnect them?
You can't, in this code.
1) There is, by design, no introspection of the node graph in the Web Audio API; this enables optimized garbage collection and keeps large numbers of nodes cheap. Two potential solutions: either maintain your own list of the playing oscillators, or connect them all to a single gain node (that is, connect each note's envelope gain node to a "mixer" gain node) and then disconnect, and release your references to, that mixer node, as sketched below.
2) Not sure what you mean by "pause the for loop" - I presume you have a for loop wrapped around the play note method?
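A sketch of the mixer approach mentioned above (all of playSound's envelope gains feed one shared gain node, so pausing is a single disconnect):
// Create the shared mixer once, next to your AudioContext.
var mixer = context.createGain();
mixer.connect(context.destination);

// Inside playSound(), connect each note's envelope gain to the mixer
// instead of directly to context.destination:
//   gain.connect(mixer);

function pauseAll() {
    mixer.disconnect(context.destination); // silences every playing note at once
}
function resumeAll() {
    mixer.connect(context.destination);
}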
You can suspend the audio context.
const audioCtx = new AudioContext();
audioCtx.suspend(); // pauses all audio processing; currentTime stops advancing
audioCtx.resume();  // picks up again where it left off