I honestly have no idea how to go about this, but I'm sure someone out there does.
I've got a friend who runs a music streaming channel.
He wants a beep sound to trigger an audio ID every time a block of songs ends, one that says hello in a personalized manner depending on the region the listener is in.
While the ID plays, the original audio stream keeps rolling in the buffer in total silence, until another signal from the original stream turns off the overlaid message and carries on with the next block of music.
This is probably possible with JavaScript (I hope); I've seen lots of examples, but none that meets my need.
Any ideas out there?
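For what it's worth, here is a rough sketch of how the overlay part might work in the browser with the Web Audio API, assuming the page can somehow detect the end-of-block cue (e.g. from stream metadata) and knows the URL of a region-specific clip. Every name below (streamElement, playRegionId, regionClipUrl) is hypothetical, and for simplicity the overlay ends when the clip finishes rather than on a second cue from the stream:
const context = new AudioContext();
const streamElement = document.querySelector("audio#stream"); // hypothetical <audio> playing the live stream
const streamSource = context.createMediaElementSource(streamElement);
const streamGain = context.createGain();
streamSource.connect(streamGain);
streamGain.connect(context.destination);

// Called when the end-of-block cue is detected (the detection itself is not shown here).
async function playRegionId(regionClipUrl) {
  streamGain.gain.setValueAtTime(0, context.currentTime); // stream keeps rolling, but in silence
  const response = await fetch(regionClipUrl);
  const buffer = await context.decodeAudioData(await response.arrayBuffer());
  const clip = context.createBufferSource();
  clip.buffer = buffer;
  clip.connect(context.destination);
  clip.onended = () => {
    streamGain.gain.setValueAtTime(1, context.currentTime); // un-mute the stream for the next block
  };
  clip.start();
}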
TL;DR - I want to use JavaScript to detect every click in a drummer's click track (an mp3 with only beats in it) and then replace these with .wav samples of a different click sound. The drummer click track is not in constant time, so I can't simply detect the BPM and replace samples from that.
I have a task I'd like to achieve with JavaScript and the Web Audio API, but I'm not sure if it's actually possible using either of these....
Basically, I regularly use recorded backing tracks for songs, and I replace the default click track (the metronome track that a drummer plays along to) with custom click samples (one .wav sample for the first beat of a bar and another sample for the remaining beats in any given bar). Annoyingly, many of these drummer click tracks are not in constant time, so they do not have a constant BPM from start to finish.
I want to detect every click in a click track (every peak in the soundwave) and then replace these with the .wav samples, and download the final file as an MP3. Is this possible?
There's no built-in way to do this in WebAudio. You will have to implement your own peak detector using a ScriptProcessorNode or AudioWorkletNode. Once you have the location of each peak, you can schedule your replacement clicks to start playing at those times. With an OfflineAudioContext, you can render the result as PCM. To get a compressed version (probably not mp3), I think you need to use a MediaRecorder.
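As a minimal sketch of that idea, assuming the click track and the replacement click have already been decoded into AudioBuffers (the names clickTrackBuffer and replacementClickBuffer, the threshold, and the minimum gap are all assumptions, not tuned values), you could scan the samples for peaks and render the replacements offline:
// Naive threshold-based peak detector over the decoded click track.
function findPeaks(buffer, threshold = 0.5, minGapSeconds = 0.1) {
  const data = buffer.getChannelData(0);
  const minGap = buffer.sampleRate * minGapSeconds;
  const peaks = [];
  let lastPeak = -minGap;
  for (let i = 0; i < data.length; i++) {
    if (Math.abs(data[i]) >= threshold && i - lastPeak >= minGap) {
      peaks.push(i / buffer.sampleRate); // peak time in seconds
      lastPeak = i;
    }
  }
  return peaks;
}

// Schedule a replacement click at every detected peak and render offline to PCM.
async function renderReplacedClicks(clickTrackBuffer, replacementClickBuffer) {
  const offline = new OfflineAudioContext(1, clickTrackBuffer.length, clickTrackBuffer.sampleRate);
  for (const time of findPeaks(clickTrackBuffer)) {
    const source = offline.createBufferSource();
    source.buffer = replacementClickBuffer;
    source.connect(offline.destination);
    source.start(time);
  }
  return offline.startRendering(); // resolves with an AudioBuffer of raw PCM
}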
I'm recreating the children's game "Simon" on Codepen and using the Web Audio API to create tones upon a user click or upon the machine generating a sequence.
When the program starts, I initialize an audio context, then initialize four different oscillators, one for each tone/color. I also start each tone.
let context = new AudioContext();
let audioblue = context.createOscillator();
audioblue.frequency.value = 329.63;
audioblue.type = "sine";
audioblue.start();
// I repeat same code as above for audiored, audiogreen, and audioyellow
I have the program generate a random number (0, 1, 2 or 3, correlating to each quadrant of the board), one number at a time, requiring the player to recreate the full sequence before the program generates the next random number. Every time a new number is added to the sequence, I use setInterval() to "play" the sequence. "Playing" the sequence means both lighting up the quadrant and playing the corresponding tone. For playing the tone, I have the following code (the same is true for audiored, audiogreen, and audioyellow):
audioblue.connect(context.destination);
I then use setInterval() to undo the lighting up and disconnect the audio.
audioblue.disconnect(context.destination);
(There's probably a better way to do this, but as a beginner this is how I was able to figure it out.)
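Put together, the playback step looks roughly like this (a sketch, not the exact Codepen code; the oscillators array, the sequence argument, and the timings below are placeholders):
const oscillators = [audiogreen, audiored, audioyellow, audioblue]; // placeholder mapping of numbers to quadrants
function playSequence(sequence) {
  let i = 0;
  const stepTimer = setInterval(() => {
    const osc = oscillators[sequence[i]];
    osc.connect(context.destination);                           // tone (and light) on
    setTimeout(() => osc.disconnect(context.destination), 400); // tone (and light) off
    if (++i >= sequence.length) clearInterval(stepTimer);
  }, 800); // placeholder step length
}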
Everything works. The problem is you can hear an unpleasant feedback sound on the connecting and disconnecting of the tone. Is it possible to eliminate this feedback sound so the tone is more pleasant and user-friendly?
Thanks for any help you can offer.
Here's a link to the Codepen: https://codepen.io/lieberscott/pen/aEqaNd
I'm trying to write a Node.js website for streaming HTML5 video (WebM, MP4), but I don't know how to tell when a user has finished viewing a video. That means I need a start point (when the user starts viewing the video) and an end point (when they finish viewing it), so I can work out what percentage of the video they have watched.
The videos are located on our server.
The video HTML element (HTMLVideoElement, which inherits from HTMLMediaElement) has currentTime and duration properties that you can look at in JavaScript, and also a bunch of events you can listen for, like play, playing, pause, timeupdate, etc.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLMediaElement
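For example (a minimal sketch; the element id "myVideo" is a placeholder, and in a real site you would send these values to your Node.js server instead of logging them):
const video = document.getElementById("myVideo"); // placeholder id

video.addEventListener("play", () => {
  console.log("started viewing at", video.currentTime, "seconds in");
});

video.addEventListener("timeupdate", () => {
  // How far into the video the user has reached, as a percentage.
  const percent = (video.currentTime / video.duration) * 100;
  console.log(percent.toFixed(1) + "% reached");
});

video.addEventListener("ended", () => {
  console.log("user reached the end of the video");
});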
I'm currently helping a friend develop a web application in which I need ~6 audio tracks (all using the same time signature) to continuously loop and stay in sync. To give context, it is a typeface-music pairing application where as soon as a typeface is chosen, the associated audio loop starts playing and as the user keeps picking typefaces, the tracks layer and begin to resemble a song.
I've tried using SoundJS and the Buzz sound library, but I keep running into the same problem: there is always a slight delay between loops. This would be fine if all my audio tracks were the same length, but they aren't, so very quickly things go out of sync.
This seems to be a known problem, but I can't seem to find any answer to how to fix it. I came across Hivenfour's SeamlessLoop 2.0, but - unless I'm using it completely wrong - it doesn't actually seem to work (setting a volume returns an error).
If anyone has experience with this, I would truly appreciate any input! Thanks :)
SoundJS's WebAudioPlugin uses a look-ahead approach with Web Audio that loops seamlessly, which is described here, in what will probably be a very helpful article on audio timing.
Also be aware that some compression formats pad sounds with a little silence at the start and end, which breaks seamless loops. I believe mp3 does this. WAV is supported broadly and does not.
As for the HTMLAudioPlugin, we loop as smoothly as the browser will allow, but it does not have the same precision as Web Audio.
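In plain Web Audio terms (a sketch, not SoundJS's actual internals; the URLs are placeholders), gapless, synchronized loops come from decoding each track into an AudioBuffer, looping it on an AudioBufferSourceNode, and starting every source at the same context time:
const context = new AudioContext();

async function loadBuffer(url) {
  const response = await fetch(url);
  return context.decodeAudioData(await response.arrayBuffer());
}

async function startSyncedLoops(urls) {
  const buffers = await Promise.all(urls.map((url) => loadBuffer(url))); // decode everything first
  const startTime = context.currentTime + 0.1;                           // small look-ahead
  return buffers.map((buffer) => {
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.loop = true;              // sample-accurate, gapless looping
    source.connect(context.destination);
    source.start(startTime);         // shared start time keeps the layers aligned
    return source;
  });
}

startSyncedLoops(["loop-a.wav", "loop-b.wav"]); // placeholder URLs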
Hope that helps.
I'm making a simple game in Javascript, in which when an object collides with a wall, it plays a "thud" sound. That sound's loudness depends on the object's velocity (higher velocity => louder sound).
The play function:
playSound = function(id, vol) // id: the sound's index in the sounds array; vol: volume/loudness of the sound
{
    if (vol) // sometimes, I just want to play the sound without worrying about volume
        sounds[id].volume = vol;
    else
        sounds[id].volume = 1;
    sounds[id].play();
}
How I call it:
playSound(2, Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV); // self.TV stands for terminal velocity. This computes the actual speed using the Pythagorean theorem and then divides it by self.TV, which results in a number from 0 to 1. 2 is the id of the sound I want to play.
In Chrome, things work quite well. In Firefox, though, each time a collision with a wall happens (=> playSound gets called), there's a pause lasting almost half a second! At first, I thought the issue was Math.sqrt, but I was wrong. This is how I tested it:
//playSound(2, 1); //2 Is the id of the sound I want to play, and 1 is max loudness
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
This completely removed the collision lag, and led me to believe that Math.sqrt isn't causing any problems at all. Just to be sure, though, I did this:
playSound(2, 1); //2 Is the id of the sound I want to play, and 1 is max loudness
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
//Math.sqrt(p.vx*p.vx + p.vy*p.vy)/self.TV;
And the lag was back! Now I'm sure that playing a sound causes problems. Am I correct? Why is this happening? How do I fix it?
I ran into this same delay issue making a sound when the player fires a weapon. My solution was two-fold:
Play each sound at load time and then pause it immediately. This will allow it to resume playing quickly, rather than playing from scratch. Do this play-pause technique after every play of the sound.
Use a pool of <audio> objects for each sound, rather than a single audio object for each sound type. Instead of just using sounds[id], use a 2D array, accessed with sounds[id][count]. Here, sounds[id] is a list of audio objects that all have the same sound, and count is the index of the current object in use for that sound id. With each call to playSound(id), increment the count associated with that id, so that the next call invokes a different audio object.
I had to use these together, because the play-pause technique does a good job of moving the buffering delay to before the sound needs to be played, but if you need the sound rapidly, you'll still get the delay. This way, the most recently used audio object can "recharge" while another object plays.
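A rough sketch of those two ideas combined (the file list, pool size, and priming details are assumptions, and autoplay policies may block the priming play until the user has interacted with the page):
const POOL_SIZE = 4;                       // assumed pool size per sound
const files = ["thud.wav", "fire.wav"];    // placeholder sound files
const sounds = [];                         // sounds[id] is a pool of <audio> objects
const counters = [];                       // counters[id] is the next pool index to use

function prime(audio) {
  // Play and immediately pause so the next play() resumes instead of buffering from scratch.
  const p = audio.play();
  if (p && p.catch) p.catch(() => {});     // autoplay may be blocked until a user gesture
  audio.pause();
}

files.forEach((file, id) => {
  sounds[id] = [];
  counters[id] = 0;
  for (let i = 0; i < POOL_SIZE; i++) {
    const audio = new Audio(file);
    audio.onended = () => prime(audio);    // re-prime after every play
    prime(audio);
    sounds[id].push(audio);
  }
});

function playSound(id, vol) {
  const audio = sounds[id][counters[id]];
  counters[id] = (counters[id] + 1) % POOL_SIZE; // rotate through the pool
  audio.volume = vol !== undefined ? vol : 1;
  audio.currentTime = 0;
  audio.play();
}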
Two things that might help are to utilize Web Workers or to precompute several levels of loudness in advance, which you could also do in the background with worker threads. I'm saying this without going into the peculiarities of the Web Audio API or how you're computing the sound output, but if you've exhausted all other approaches, this might be the next direction to focus on.
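One way to read the "precomputed loudness" suggestion (a sketch only; the file name, the number of steps, and the playThud name are assumptions) is to keep one <audio> clone per volume step and just pick the nearest step at collision time, so nothing has to be adjusted in the moment:
const VOLUME_STEPS = [0.25, 0.5, 0.75, 1.0];   // assumed loudness levels
const thudVariants = VOLUME_STEPS.map((v) => {
  const audio = new Audio("thud.wav");         // placeholder file
  audio.volume = v;                            // set once, up front
  return audio;
});

function playThud(loudness) {                  // loudness in the range 0..1
  const index = Math.min(VOLUME_STEPS.length - 1,
                         Math.round(loudness * (VOLUME_STEPS.length - 1)));
  const variant = thudVariants[index];
  variant.currentTime = 0;
  variant.play();
}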