Web Audio API: Prevent momentary feedback sound on connect/disconnect

I'm recreating the children's game "Simon" on Codepen and using the Web Audio API to create tones upon a user click or upon the machine generating a sequence.
When the program starts, I initialize an audio context, then initialize four different oscillators, one for each tone/color. I also start each tone.
let context = new AudioContext();
let audioblue = context.createOscillator();
audioblue.frequency.value = 329.63; // E4
audioblue.type = "sine";
audioblue.start();
// I repeat the same code as above for audiored, audiogreen, and audioyellow
I have the program generate a random number (0, 1, 2, or 3, corresponding to each quadrant of the board), one number at a time, requiring the player to recreate the full sequence before the program generates the next random number. Every time a new number is added to the sequence, I use setInterval() to "play" the sequence. "Playing" the sequence means both lighting up the quadrant and playing the corresponding tone. For playing the tone, I have the following code (the same is true for audiored, audiogreen, and audioyellow):
audioblue.connect(context.destination);
I then use setInterval() to undo the lighting up and disconnect the audio.
audioblue.disconnect(context.destination);
(There's probably a better way to do this, but as a beginner this is how I was able to figure it out.)
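Condensed, the play/stop pattern described above looks roughly like this (a sketch; playQuadrant is a hypothetical name, and setTimeout stands in for the question's setInterval for brevity):

function playQuadrant(osc, durationMs) {
  osc.connect(context.destination);      // tone becomes audible
  setTimeout(() => {
    osc.disconnect(context.destination); // tone stops (with an audible click)
  }, durationMs);
}

playQuadrant(audioblue, 500); // light up + play the blue tone for half a second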
Everything works. The problem is you can hear an unpleasant feedback sound on the connecting and disconnecting of the tone. Is it possible to eliminate this feedback sound so the tone is more pleasant and user-friendly?
Thanks for any help you can offer.
Here's a link to the Codepen: https://codepen.io/lieberscott/pen/aEqaNd

Related

How feasible is it to use the Oscillator.connect() and Oscillator.disconnect() methods to turn on/off sounds in an app built with the Web Audio API?

I've been trying to create an 88-key piano with the Web Audio API. The plan is to first start all 88 oscillators at the appropriate frequencies, and then use the Oscillator.connect() and Oscillator.disconnect() methods on the respective oscillators as the piano keys are pressed and released. The state of the AudioContext will be "running" all the time. Now, I have two questions here:
Is this the right way to do it?
I get a clicking noise at the start and end of the sounds when I play them. Why is this happening, and how do I get rid of it?
PS: The reason for creating a piano like this is to indulge myself in the delight of having created something from scratch. So using prerecorded sounds is not an option.
If you want to do it that way, add a gain node to each oscillator, and then turn the gain off and on instead of disconnecting and reconnecting. That's probably what's causing your clicks and snaps. More below.
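In code, that suggestion looks roughly like this (a minimal sketch; noteOn/noteOff are hypothetical names):

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const gain = ctx.createGain();

osc.connect(gain);             // osc -> gain -> destination
gain.connect(ctx.destination);
gain.gain.value = 0;           // start muted
osc.start();                   // oscillator runs the whole time

// Key down / key up: ramp the gain instead of touching the graph.
const noteOn  = () => gain.gain.setTargetAtTime(1, ctx.currentTime, 0.01);
const noteOff = () => gain.gain.setTargetAtTime(0, ctx.currentTime, 0.01);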
BUT... that's still pretty overkill, having 88 oscillators. The standard way keyboards do this is with a limited polyphony.
Create an array of ten oscillators, all hooked to their own gain, each gain hooked to the destination.
Keep track of how many keys are being pressed, and how many oscillators are in use.
keysPressed = {}
// on key down: grab a free oscillator from the pool
keysPressed["60"] = nextAvailableOsc() // "60" being the MIDI note number of the pressed key
At any given time there are ten oscillators ready to go, one for each finger. If for some reason you require more, add them dynamically.
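Extending the single-oscillator wiring above to a ten-voice pool might look like this (a sketch; noteOn/noteOff and the MIDI-to-frequency conversion are illustrative choices, not part of the original answer):

const ctx = new AudioContext();
const POOL_SIZE = 10; // one voice per finger

// Each pool entry is an oscillator permanently wired to its own muted gain node.
const pool = Array.from({ length: POOL_SIZE }, () => {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  gain.gain.value = 0;
  osc.connect(gain);
  gain.connect(ctx.destination);
  osc.start();
  return { osc, gain, inUse: false };
});

function nextAvailableOsc() {
  return pool.find((voice) => !voice.inUse) || null;
}

const keysPressed = {};

function noteOn(midiNote) {
  const voice = nextAvailableOsc();
  if (!voice) return; // all ten voices busy; could grow the pool here
  voice.inUse = true;
  voice.osc.frequency.value = 440 * Math.pow(2, (midiNote - 69) / 12); // MIDI -> Hz
  voice.gain.gain.setTargetAtTime(1, ctx.currentTime, 0.01); // smooth "on"
  keysPressed[midiNote] = voice;
}

function noteOff(midiNote) {
  const voice = keysPressed[midiNote];
  if (!voice) return;
  voice.gain.gain.setTargetAtTime(0, ctx.currentTime, 0.01); // smooth "off"
  voice.inUse = false;
  delete keysPressed[midiNote];
}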
The clicking sound is because you're hard disconnecting and reconnecting running oscillators. Use a gain node in between the osc and destination, and turn that on and off.
Also, you might get clicks when changing values abruptly, such as
gainNode.gain.value = 0
That can create a glitch in the sound stream. It should be:
gainNode.gain.setValueAtTime(0, ctx.currentTime + 1)
(The + 1 offset may not be necessary.) There are also the setTargetAtTime, linearRampToValueAtTime, and exponentialRampToValueAtTime methods, which make things even smoother:
https://developer.mozilla.org/en-US/docs/Web/API/AudioParam
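For example, a short ramped release that avoids the glitch (a sketch; the 30 ms figure is an arbitrary choice, and anchoring the ramp at the current value is a common idiom rather than something from the answer above):

const now = ctx.currentTime;
gainNode.gain.cancelScheduledValues(now);
gainNode.gain.setValueAtTime(gainNode.gain.value, now); // anchor the ramp at the current value
gainNode.gain.linearRampToValueAtTime(0, now + 0.03);   // ~30 ms fade instead of a hard jump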
An alternative approach is to create the oscillators on demand. When a key is pressed, create one or more oscillators (for harmonics). These can feed into one (or more?) gain nodes, which have automation for the attack and sustain phases. When the key is released, automate the gain for a release phase and schedule the oscillator(s) to stop after the release phase has ended. Then drop all references to the oscillators.
I find this easier to reason about than having an array of oscillators, and there's no limit to the polyphony. But this approach generates more garbage that eventually has to be handled by the collector.
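A sketch of that on-demand pattern (the envelope times are arbitrary, and only attack and release are modeled here):

function playNote(ctx, freq, attack = 0.02, release = 0.3) {
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = freq;
  osc.connect(gain);
  gain.connect(ctx.destination);

  // Attack: ramp from silence up to full volume, then sustain.
  gain.gain.setValueAtTime(0, ctx.currentTime);
  gain.gain.linearRampToValueAtTime(1, ctx.currentTime + attack);
  osc.start();

  // The caller invokes this on key release.
  return function stop() {
    const now = ctx.currentTime;
    gain.gain.cancelScheduledValues(now);
    gain.gain.setValueAtTime(gain.gain.value, now);
    gain.gain.linearRampToValueAtTime(0, now + release); // release phase
    osc.stop(now + release); // after this, dropping references leaves it to the GC
  };
}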

Peak detection with the web audio API?

TL;DR - I want to use JavaScript to detect every click in a drummer's click track (an mp3 with only beats in it) and then replace these with .wav samples of a different click sound. The drummer's click track is not in constant time, so I can't simply detect the BPM and replace samples based on that.
I have a task I'd like to achieve with JavaScript and the Web Audio API, but I'm not sure if it's actually possible using either of these...
Basically I regularly use recorded backing tracks for songs and I replace the default click track (the metronome track that a drummer plays along to) with custom click samples (one .wav sample for the first beat of a bar and another sample for the remaining beats in any given bar). Annoyingly many of these drummer click tracks are not in constant time - so do not have a constant BPM from start to finish.
I want to detect every click in a click track (every peak soundwave) and then replace these with the .wav samples, and download the final file as a MP3. Is this possible?
There's no built-in way to do this in WebAudio. You will have to implement your peak detector using a ScriptProcessorNode or AudioWorkletNode. Once you have the location of each peak, you can then schedule your replacement clicks to start playing at the click times. With an OfflineAudioContext, you can get the resulting PCM result. To get a compressed version (probably not mp3), I think you need to use a MediaRecorder.
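As a starting point, a naive detector can decode the mp3 and scan the raw samples against a threshold offline, rather than using a worklet (a sketch; the 0.5 threshold and 50 ms hold-off are arbitrary guesses that would need tuning):

async function findClicks(ctx, arrayBuffer, threshold = 0.5) {
  const buffer = await ctx.decodeAudioData(arrayBuffer);
  const data = buffer.getChannelData(0); // mono analysis is enough for a click track
  const holdOff = Math.floor(buffer.sampleRate * 0.05); // skip 50 ms after each peak
  const peaks = [];
  for (let i = 0; i < data.length; i++) {
    if (Math.abs(data[i]) >= threshold) {
      peaks.push(i / buffer.sampleRate); // click time in seconds
      i += holdOff; // don't count the same click twice
    }
  }
  return peaks; // schedule the replacement .wav samples at these offsets
}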

How to make web app like "Epic Sax Gandalf" using JavaScript?

I want to create an application that, when launched on different devices, will display the same content (music, photo, or video) at the same time.
Here is a simple example.
And a real-life example. :)
My first idea was based on machine local time.
timestamp = new Date().getTime()
if (String(timestamp).endsWith("0000")) // true once every 10 seconds
  play music and animation // music = 10 s, animation = 10 s
and start this function every 10 seconds.
I know, however, that this solution may not work and the content may still be unsynchronized.
So, does anyone know how to achieve the effect I'm talking about using JavaScript?
I actually had the same idea as you had and implemented a little proof of concept for it.
You can try it out like this:
1. Open the deployed site (either in two browsers/windows or even on different devices)
2. Choose the same unique channel-id for all your opened sites
3. Click "Start"
4. One window should have a heading with "Leader". The others should be "Follower"s. By clicking on the Video-id field you can paste the video id of any YouTube video (or choose from a small list of recommended ones).
5. Click "Play" on each window
6. Wait a bit - it can take up to 1 minute until the videos synchronize
Each follower has a "System time offset". On the same device it should be nearly 0 ms. This is the amount by which the system time (Date.now()) in the browser differs from the system time in the Leader window.
On the top left of each video you can see a time that changes every few seconds and should be under 20 ms (after the videos are synchronized). This is the amount by which the video feed differs from its optimal time in relation to the system time.
(I would love to know whether it works for you too. My Pusher deployment is EU-based, so problems with increased latency could occur...)
How does it work?
The synchronisation happens in two steps:
Step 1: Synchronizing the system times
I basically implemented the NTP (Network Time Protocol) algorithm in JS, using WebSockets or Pusher JS as the channel of communication between each Follower client and the Leader client. Look under "Clock synchronization algorithm" in the Wikipedia article for more information.
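The clock-synchronization step boils down to the standard NTP offset formula. A minimal sketch, assuming the Follower records t0 (ping sent) and t3 (reply received) while the Leader stamps t1 (ping received) and t2 (reply sent):

function clockOffset(t0, t1, t2, t3) {
  // Positive result means the Leader's clock is ahead of ours.
  return ((t1 - t0) + (t2 - t3)) / 2;
}

function roundTripDelay(t0, t1, t2, t3) {
  // Useful for discarding samples taken over a slow/noisy connection.
  return (t3 - t0) - (t2 - t1);
}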
Step 2: Synchronizing the video feed to the "reference time"
At the currentTime (= synchronized system time) we want the currentVideoTime to be at currentTime % videoLength. Because the currentTime, or system time, has been synchronized between the clients in Step 1, and the videoLength is obviously the same in all clients (they are supposed to play the same video), the currentVideoTime is the same too.
The big problem is that if I just started the video at the correct time on all clients (via setTimeout()), they probably wouldn't play at the same time, because one system might have network problems and still be buffering the video, or another program might be demanding the OS's processing power at that moment. Depending on the device, the time between calling the video player's start function and the video actually starting differs too.
I'm solving this by checking every second whether the video is at the right position (= currentTime % videoLength). If the difference from the right position is bigger than 20 ms, I stop the video, skip it to the position it should be at in 5 s plus the amount it was lagging behind, and start it again.
The code is a bit more sophisticated (and complicated) but this is the general idea.
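In outline, the correction loop might look like this (a sketch, not the actual code; syncedNow() and the player methods are hypothetical names):

const TOLERANCE_S = 0.02;  // 20 ms, as described above
const RESYNC_DELAY_S = 5;

setInterval(() => {
  // Where the video should currently be, per the synchronized clock.
  const target = (syncedNow() % videoLengthMs) / 1000;
  const drift = player.getCurrentTime() - target;
  if (Math.abs(drift) > TOLERANCE_S) {
    player.pause();
    // Jump to where the video will need to be in RESYNC_DELAY_S seconds,
    // then resume exactly when that moment arrives.
    player.seekTo(target + RESYNC_DELAY_S);
    setTimeout(() => player.play(), RESYNC_DELAY_S * 1000);
  }
}, 1000);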
sync-difference-estimator
synchronized-player

Play alternate sound on stream signal or silence

I definitely don't have an idea of how to go about this, but I'm sure someone out there does.
I've got a friend who has a music streaming channel.
He wants that every time a block of songs ends, a beep sound triggers an audio ID saying hello in a personalized manner, depending on the region the listener is in.
While that message plays, the original audio stream keeps rolling in the buffer in total silence, until another signal from the original stream turns off the overlaid message and the next block of music carries on.
This is probably possible with JavaScript (I hope). I've seen lots of examples, but none that meets my need.
Any ideas out there?

Is playing sound in Javascript performance heavy?

I'm making a simple game in Javascript, in which when an object collides with a wall, it plays a "thud" sound. That sound's loudness depends on the object's velocity (higher velocity => louder sound).
The play function:
playSound = function(id, vol) { // id: index in the sounds array; vol: volume/loudness of the sound
  if (vol) // sometimes, I just want to play the sound without worrying about volume
    sounds[id].volume = vol;
  else
    sounds[id].volume = 1;
  sounds[id].play();
};
How I call it:
playSound(2, Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV); // self.TV stands for terminal velocity. This calculates the actual speed using the Pythagorean theorem and then divides it by self.TV, which results in a number from 0 to 1. 2 is the id of the sound I want to play.
In Chrome, things work quite well. In Firefox, though, each time a collision with a wall happens (=> playSound gets called), there's a pause lasting almost half a second! At first, I thought the issue was with Math.sqrt, but I was wrong. This is how I tested it:
//playSound(2, 1); //2 Is the id of the sound I want to play, and 1 is max loudness
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
This completely removed the collision lag and led me to believe that Math.sqrt isn't causing any problems at all. Just to be sure, though, I did this:
playSound(2, 1); //2 Is the id of the sound I want to play, and 1 is max loudness
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
And the lag was back! Now I'm sure that playing a sound causes problems. Am I correct? Why is this happening? How do I fix it?
I ran into this same delay issue making a sound when the player fires a weapon. My solution was two-fold:
Play each sound at load time and then pause it immediately. This will allow it to resume playing quickly, rather than playing from scratch. Do this play-pause technique after every play of the sound.
Use a pool of <audio> objects for each sound, rather than a single audio object per sound type. Instead of just using sounds[id], use a 2D array, accessed with sound[id][count]. Here, sound[id] is a list of audio objects that all have the same sound, and count is the index of the current object in use for that sound id. With each call to playSound(id), increment the count associated with that id, so that the next call invokes a different audio object.
I had to use these together, because the play-pause technique does a good job of moving the buffering delay to before the sound needs to be played, but if you need the sound rapidly, you'll still get the delay. This way, the most recently used audio object can "recharge" while another object plays.
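A sketch combining both techniques (the pool size, initSound, and the promise-based play() handling are illustrative assumptions, not the answerer's code):

const POOL_SIZE = 4;
const soundPools = []; // soundPools[id] is a list of <audio> clones of one sound
const poolIndex = [];  // poolIndex[id] is the next clone to use for that sound

function initSound(id, url) {
  soundPools[id] = [];
  poolIndex[id] = 0;
  for (let i = 0; i < POOL_SIZE; i++) {
    const audio = new Audio(url);
    // Play-pause once up front so the element is buffered and resumes quickly.
    audio.play().then(() => audio.pause()).catch(() => {});
    soundPools[id].push(audio);
  }
}

function playSound(id, vol) {
  const pool = soundPools[id];
  const audio = pool[poolIndex[id]];
  poolIndex[id] = (poolIndex[id] + 1) % pool.length; // rotate so the last-used clone can "recharge"
  audio.volume = vol || 1;
  audio.currentTime = 0;
  audio.play();
}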
Two things that might help are to either utilize Web Workers or to precompute several levels of loudness in advance, which you could also do in the background with worker threads. I'm saying this without going into the peculiarities of the Web Audio API or how you're computing the sound output, but if you've exhausted all other approaches, this might be the next direction to focus on.
