I'm building a simple synthesizer with WebMIDI control. The gain node has no effect on the oscillator; it stays at full volume the entire time. Also, when I play chords the frequencies are correct but there is a wobbling and screeching effect. The problems are present both when playing with my MIDI controller and when starting and stopping the synthesizer from the console.
Here's my synthesizer code:
var synth = {
  voices: {},
  start: function (note, vol) {
    this.voices[note] = {
      gain: audio.createGain(),
      osc: audio.createOscillator()
    };
    this.voices[note].gain.connect(audio.destination);
    this.voices[note].osc.frequency.value = noteToFreq(note);
    this.voices[note].osc.connect(this.voices[note].gain);
    this.voices[note].osc.start(0);
    this.voices[note].gain.gain.setTargetAtTime(vol, audio.currentTime, 0.5);
  },
  stop: function (note) {
    this.voices[note].gain.gain.setTargetAtTime(0, audio.currentTime, 2);
    this.voices[note].osc.stop(audio.currentTime + 2);
  }
};
Oscillators are full-range, i.e. [-1, +1]. When you sum two signals (e.g. by connecting them to the same output node), they're in the range [-2, +2], which will clip some of the time. Run them through a gain node with value = 0.5 and see if that eliminates the problem. (Ideally, you'd drop the gain a little and run them through a compressor/limiter.)
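For example, something like this (a sketch assuming the audio context and per-voice gain nodes from the code above; masterGain and limiter are names I've made up, and the compressor is left at its default settings):
var masterGain = audio.createGain();
masterGain.gain.value = 0.5;                     // headroom so two full-range voices don't clip

var limiter = audio.createDynamicsCompressor();  // rough limiter; tweak threshold/ratio to taste

masterGain.connect(limiter);
limiter.connect(audio.destination);

// In synth.start(), connect each voice's gain node to the bus instead of the destination:
// this.voices[note].gain.connect(masterGain);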
I'm trying to use the Web Audio API in JavaScript for the first time.
For a personal project I'm trying to control the volume, but I'm having some difficulties.
I'm using this GitHub project: https://github.com/kelvinau/circular-audio-wave
In this project I added this function, which is called from play():
changeVolume() {
  const volume = this.context.createGain();
  volume.gain.value = 0.1;
  volume.connect(this.context.destination);
  this.sourceNode.connect(volume);
}
When I set the gain to 0 it doesn't mute the sound, but when I set it to 3 it works and the sound is louder.
Do you know why I can increase the volume but can't lower it?
From what you are describing, it sounds as if there is still a direct connection from the sourceNode to the destination of the context.
It should work if you remove this line from the original example:
this.sourceNode.connect(this.context.destination);
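With that line removed, the gain node is the only path to the speakers, so a gain of 0 fully mutes: previously the un-gained direct connection kept playing at full level alongside the gained signal, which is why only boosts above unity were audible. A minimal sketch of the routing that should remain (same names as in the question's changeVolume()):
changeVolume() {
  const volume = this.context.createGain();
  volume.gain.value = 0.1;                    // 0 mutes, 1 is unity, 3 boosts
  this.sourceNode.connect(volume);            // source feeds the gain node...
  volume.connect(this.context.destination);   // ...and only the gain node feeds the output
}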
This pen uses the ToneJS library to play pitches on the computer keyboard. However, it can only play one note at a time. How can I code this to play multiple notes at once?
The code:
var keyToPitch = { "z":"C3", "s":"C#3", "x":"D3", "d":"D#3", "c":"E3", "v":"F3", "g":"F#3", "b":"G3", "h":"G#3", "n":"A3", "j":"A#3", "m":"B3", ",":"C4" }

var synth = new Tone.Synth()
synth.oscillator.type = "sawtooth"
synth.toMaster()

window.addEventListener('keydown', this.onkeydown)
window.addEventListener('keyup', this.onkeyup)

function onkeydown(e){
  synth.triggerAttack(keyToPitch[e.key], Tone.context.currentTime)
}
function onkeyup(e){
  synth.triggerRelease()
}
Oscillators in ToneJS are audio sources, and Master is the output that plays all the inputs connected to it. So to play multiple overlapping sounds, you just create multiple oscillators and connect them all to Master.
For the kind of demo you're linking to, a typical thing to do might be to make a fixed number (5, say) of oscillators, and rotate through them in turn when you want to trigger a sound:
var synthIndex = 0

function startSound(){
  synthIndex = (synthIndex + 1) % voices
  var synth = synths[synthIndex]
  synth.triggerAttack(/* ... */)
}
Or similar. In principle you could make a separate oscillator for every pitch, but this would probably hurt performance in a less trivial demo.
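Here is a rough sketch of that idea applied to the keyboard demo above, replacing the single-synth handlers. The pool size of 5 and the pitch-to-voice bookkeeping are just one way to do it, and voice stealing (running out of free voices) isn't handled:
var voices = 5
var synths = []
for (var i = 0; i < voices; i++) {
  var s = new Tone.Synth()
  s.oscillator.type = "sawtooth"
  s.toMaster()
  synths.push(s)
}

var synthIndex = 0
var activeVoice = {}          // pitch -> synth, so keyup can release the right voice

function onkeydown(e) {
  var pitch = keyToPitch[e.key]
  if (!pitch || activeVoice[pitch]) return   // ignore unmapped keys and key repeat
  synthIndex = (synthIndex + 1) % voices
  var synth = synths[synthIndex]
  synth.triggerAttack(pitch, Tone.context.currentTime)
  activeVoice[pitch] = synth
}

function onkeyup(e) {
  var pitch = keyToPitch[e.key]
  if (activeVoice[pitch]) {
    activeVoice[pitch].triggerRelease()
    delete activeVoice[pitch]
  }
}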
I make a new oscillator for each note I play.
function playSound(freq, duration) {
  var attack = 5,
      decay = duration,
      gain = context.createGain(),
      osc = context.createOscillator();

  gain.connect(context.destination);
  gain.gain.setValueAtTime(0, context.currentTime);
  gain.gain.linearRampToValueAtTime(0.1, context.currentTime + attack / 1000);
  gain.gain.linearRampToValueAtTime(0, context.currentTime + decay / 1000);

  osc.frequency.value = freq;
  osc.type = "sine";
  osc.connect(gain);
  osc.start(0);

  setTimeout(function() {
    osc.stop(0);
    osc.disconnect(gain);
    gain.disconnect(context.destination);
  }, decay);
}
The melody is played in a for loop, where playSound is called. When I click the pause button, I want to silence the melody and pause the for loop so that if I click the play button again, the melody resumes. How do I access all the current oscillators to disconnect them?
You can't, in this code.
1) There is, by design, no introspection of the node graph in the Web Audio API; this enables optimized garbage collection and scales to large numbers of nodes. Two potential solutions: either maintain a list of playing oscillators, or connect them all to a single gain node (that is, connect their envelope gain nodes to a "mixer" gain node) and then disconnect, and release references to, that gain node. See the sketch below.
2) Not sure what you mean by "pause the for loop" - I presume you have a for loop wrapped around the play-note method?
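A rough sketch of the second option, with the envelope code from the question omitted for brevity (the mixer and stopAll names are mine):
var mixer = context.createGain();        // one shared "mixer" node
mixer.connect(context.destination);

function playSound(freq, duration) {
  var gain = context.createGain(),
      osc = context.createOscillator();
  gain.connect(mixer);                   // voices feed the mixer instead of the destination
  osc.connect(gain);
  osc.frequency.value = freq;
  osc.start(0);
  osc.stop(context.currentTime + duration / 1000);
}

function stopAll() {
  mixer.disconnect();                    // silences every currently playing oscillator at once
  mixer = context.createGain();          // fresh mixer for when playback resumes
  mixer.connect(context.destination);
}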
You can suspend the audio context.
const audioCtx = new AudioContext();
audioCtx.suspend();
audioCtx.resume();
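For instance, wired to play/pause buttons (the element ids here are assumptions), this is a quick sketch; note that suspend() stops the audio clock, so events scheduled on it resume where they left off:
document.getElementById('pause').addEventListener('click', function () {
  audioCtx.suspend();   // freezes rendering and currentTime
});
document.getElementById('play').addEventListener('click', function () {
  audioCtx.resume();
});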
I've built an HTML5 video player that I load into a canvas to manipulate and then draw onto a second canvas to display. The video starts out quite slow and the frame rate only gets worse each time it is played. All I am currently manipulating is the color value when the video is paused, but I will eventually be doing real-time manipulation throughout videos that will be posted in the future.
I used the tutorial below to learn this trick: https://www.youtube.com/watch?v=zjQzP3mOXdc
Here is the relevant code, but there may be interference coming from elsewhere, so feel free to check the source code at the link at the bottom.
var v = document.getElementById('video');
var color = "#DA7AC1";
var processes = {
  timerCallback: function() {
    if (this.v2.paused || this.v2.ended) {
      return;
    }
    this.ctxIn.drawImage(this.v2, 0, 0, this.width, this.height);
    this.pixelScan();
    var self = this;
    setTimeout(function() {
      self.timerCallback();
    }, 0);
  },
  doLoad: function() {
    this.v2 = document.getElementById("video");
    this.cIn = document.getElementById("cIn");
    this.ctxIn = this.cIn.getContext("2d");
    this.cOut = document.getElementById("cOut");
    this.ctxOut = this.cOut.getContext("2d");
    var self = this;
    this.v2.addEventListener("playing", function() {
      self.width = self.v2.videoWidth;
      self.height = self.v2.videoHeight;
      cIn.width = self.v2.videoWidth;
      cIn.height = self.v2.videoHeight;
      cOut.width = self.v2.videoWidth;
      cOut.height = self.v2.videoHeight;
      self.timerCallback();
    }, false);
  },
  pixelScan: function() {
    var frame = this.ctxIn.getImageData(0, 0, this.width, this.height);
    for (var i = 0; i < frame.data.length; i += 4) {
      var grayscale = frame.data[i] * .3 + frame.data[i+1] * .59 + frame.data[i+2] * .11;
      frame.data[i] = grayscale;
      frame.data[i+1] = grayscale;
      frame.data[i+2] = grayscale;
    }
    this.ctxOut.putImageData(frame, 0, 0);
    return;
  }
};
http://coreytegeler.com/ethan/
Any and all help would be greatly appreciated! Thanks!
Reason 1
Try adjusting your timer, avoiding 0 as the timeout value:
setTimeout(function() {
  self.timerCallback();
}, 34);
34 ms is plenty, as video frame rate is typically never more than 30 FPS (NTSC) or 25 FPS (PAL), i.e. 1000 / 30. If you use 0 you risk stacking up your calls, which means the browser will be busy trying to empty the event queue.
If you use anything lower than 33-34 ms you end up having the same frame processed twice or more, which of course is unnecessary (your video is actually 29.97 FPS/NTSC, so you might want to keep 34 ms).
Reason 2
The video resolution is also full HD (1920x1080), which is a bit too much for canvas and JS to process in real time (on a typical consumer computer). Try to reduce the video size so a normally spec'ed computer will be able to process the data.
Reason 3 (in part)
You don't need two on-screen canvases or even an on-screen video. Try creating these tags dynamically without inserting them into the DOM. Use a single canvas on-screen and draw the result to that (you can putImageData from one canvas to another).
Reason 4 (in part)
Ideally, replace setTimeout with a requestAnimationFrame approach, as this improves synchronization and efficiency considerably. You can implement a toggle to reduce the effective FPS to, for example, 30, as you don't need to process each frame twice (ref. the 30 FPS video frame rate).
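As a sketch, a requestAnimationFrame loop throttled to roughly the video's frame rate could replace timerCallback() in the processes object above (the 30 FPS target and the rafCallback name are assumptions):
var frameInterval = 1000 / 30;   // ~33 ms, matching a ~30 FPS source
var lastTime = 0;

function rafCallback(now) {
  if (processes.v2.paused || processes.v2.ended) {
    return;                      // stop the loop when playback stops
  }
  if (now - lastTime >= frameInterval) {
    lastTime = now;              // skip frames that would be processed twice
    processes.ctxIn.drawImage(processes.v2, 0, 0, processes.width, processes.height);
    processes.pixelScan();
  }
  requestAnimationFrame(rafCallback);
}

// Started from the "playing" handler instead of self.timerCallback():
// requestAnimationFrame(rafCallback);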
Update
To create these elements dynamically (ref reason 3) you can do something like this:
var canvas = document.createElement('canvas'),
    video = document.createElement('video'),
    ctx = canvas.getContext('2d');

video.preload = 'auto';
video.addEventListener('canplay', start, false);

if (video.canPlayType('video/mp4')) {
  video.src = 'videoUrl.mp4';
} else if ...etc.
Then, when the video has loaded enough data (on the loadedmetadata or canplay event), you set the off-screen (and on-screen) canvas element to the size of the video:
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
Then, while the video is playing, process its frames and copy the result to the on-screen canvas you defined before.
You don't have to use an off-screen canvas; I merely mention this because your original code used an in and an out canvas, IIRC. You can simply use a single on-screen canvas and the off-screen video: draw the video frame to the canvas, process it, and put back the processed data. That should work fine in this case too.
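A minimal sketch of that single-canvas version, reusing the canvas/video/ctx variables from the snippet above and the grayscale loop from the question:
function processFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);              // draw the current video frame
  var frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (var i = 0; i < frame.data.length; i += 4) {
    var gray = frame.data[i] * 0.3 + frame.data[i+1] * 0.59 + frame.data[i+2] * 0.11;
    frame.data[i] = frame.data[i+1] = frame.data[i+2] = gray;           // process in place
  }
  ctx.putImageData(frame, 0, 0);                                        // put the result back on the same canvas
}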
I ran a profile in Chrome and it points to line 46 as taking up the most CPU.
setTimeout(function() {
  self.timerCallback();
}, 0);
Perhaps increasing the timeout will stop it from lagging.
I had the same issues and tried a number of fixes. I was using Premiere Elements, which didn't export to MP4, and used HandBrake to convert the format. I also tried FFmpeg to do the conversion, but neither worked.
What I did was switch to Kdenlive as my video editor; it exported directly to MP4, and that video worked perfectly.
So if you have this slow-render issue, it is probably an issue with the video encoding. The easiest fix is to use a high-quality video editor like Premiere Pro, Final Cut, or Kdenlive. Kdenlive is free, but it has a huge learning curve and poor public documentation.
I'm creating an HTML5 app for testing and I'm working with the Web Audio API. For generating sound on a keyboard I'm doing something like this:
keyboard.keyDown(function (note, frequency) {
  var oscillator = context.createOscillator(),
      gainNode = context.createGainNode();

  oscillator.type = 2;
  oscillator.frequency.value = frequency;
  gainNode.gain.value = 0.3;
  oscillator.connect(gainNode);

  if (typeof oscillator.noteOn !== 'undefined') {
    oscillator.noteOn(0);
  }

  gainNode.connect(context.destination);
  nodes.push(oscillator);
});
Now my question is (I tried to find examples on Google, but with no success): apart from the oscillator, what other parameters can be used to get a sound like a piano or some electronic instrument, and how do I pass them?
I'm assuming you are fairly new to synthesis. Before trying synthesis algorithms in code, I'd recommend playing with some of the software synthesizers that are available - VST or otherwise. This will give you a handle on the kind of parameters you want to be introducing into your algorithm. http://www.soundonsound.com/sos/allsynthsecrets.htm is an index for a series of really good synthesis tutorials. (Start at the bottom - part 1!)
Once you are ready to start experimenting in code, a great place to start would be to introduce an envelope to change the volume or pitch of the sound over time (changing a parameter over time like this is called 'modulation'). This video may be of interest: http://www.youtube.com/watch?v=A6pp6OMU5r8
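As a rough example of that first step, here is a simple attack/decay volume envelope grafted onto the keyDown handler above. The timings are arbitrary, and it uses the current createGain()/start() API names rather than the deprecated createGainNode()/noteOn() from the question:
keyboard.keyDown(function (note, frequency) {
  var now = context.currentTime,
      oscillator = context.createOscillator(),
      gainNode = context.createGain();

  oscillator.type = 'sawtooth';
  oscillator.frequency.value = frequency;
  oscillator.connect(gainNode);
  gainNode.connect(context.destination);

  // Envelope: quick attack up to 0.3, then decay to near-silence over a second.
  gainNode.gain.setValueAtTime(0, now);
  gainNode.gain.linearRampToValueAtTime(0.3, now + 0.02);
  gainNode.gain.exponentialRampToValueAtTime(0.001, now + 1.0);

  oscillator.start(now);
  oscillator.stop(now + 1.1);
});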
Bear in mind that almost all acoustic instruments are difficult to convincingly synthesize algorithmically, and by far the easiest way to get close to a piano is to use samples of real piano notes.