So I created the audio:
const jumpy = new Audio();
jumpy.src = "./audio/jump2.wav";
and added an event listener that triggers the audio:
const cvs = document.getElementById("ghost");
cvs.addEventListener("click", function (evt) {
  jumpy.play();
});
The problem is that the browser first waits for the audio to play in full (about 1000 ms) before it will play it again, but I want the audio to reset every time I click.
How can I do that?
For short sounds that you want to use multiple times like this, it is better to use the AudioBufferSourceNode in the Web Audio API.
For example:
const audioContext = new AudioContext();
// decode the raw audio data (an ArrayBuffer) once, up front
const buffer = await audioContext.decodeAudioData(/* audio data */);
// source nodes are one-shot: create a new one for each playback
const bufferSourceNode = audioContext.createBufferSource();
bufferSourceNode.buffer = buffer;
bufferSourceNode.connect(audioContext.destination);
bufferSourceNode.start();
The buffer will be kept in memory, already decoded to PCM and ready to play. Then when you call .start(), it will play right away.
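Tying this to the click handler from the question, a minimal sketch (assuming the same "ghost" canvas id and "./audio/jump2.wav" path) could split the work into two helpers, making the one-shot nature of source nodes explicit:

```javascript
// Fetch and decode the clip once; the decoded AudioBuffer is reusable.
async function loadClip(context, url) {
  const response = await fetch(url);
  return context.decodeAudioData(await response.arrayBuffer());
}

// Source nodes are one-shot: create a fresh one per playback. Starting a new
// one does not wait for the previous one to finish, so rapid clicks overlap.
function playClip(context, buffer) {
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start();
  return source;
}

// Browser wiring (ids and path taken from the question):
// const ctx = new AudioContext();
// const buffer = await loadClip(ctx, "./audio/jump2.wav");
// document.getElementById("ghost").addEventListener("click", () => playClip(ctx, buffer));
```

Note that browsers require a user gesture before audio can start, so creating (or resuming) the AudioContext inside a click handler is the safest pattern.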
See also: https://stackoverflow.com/a/62355960/362536
What I am trying to do is play multiple audio clips in sequence using AudioContext so that the playback is smooth, but I am doing something wrong. Most of the documentation I've seen only shows how to use a synthesizer or play a single clip. How would I go about creating this in plain JavaScript?
The idea is to:
Create AudioContext
Load the required audio as a buffer
Queue up the audio in a sequence
Let the user play/pause/stop the audio
Let the user control volume and speed
For example:
Audio1 starts at 0 seconds and runs for 5 seconds
Audio2 starts at 5 seconds and runs for 5 seconds
Audio3 starts at 20 seconds and runs for 10 seconds
<button id="AudioLoad">Load</button>
<button id="AudioPlay">Play</button>
<button id="AudioPause">Pause</button>
<button id="AudioStop">Stop</button>
<input id="AudioVolume" type="range" step="0.1" min="0.1" max="1.0" value="0.4">
<script>
const myPlayerLoad = async () => {
  const audioContext = new window.AudioContext();
  const gainNode = audioContext.createGain();
  // fetch the encoded audio from the server
  const arrayBuffer1 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-game-show-suspense-waiting-667.mp3').then(r => r.arrayBuffer());
  const arrayBuffer2 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-retro-game-emergency-alarm-1000.mp3').then(r => r.arrayBuffer());
  const arrayBuffer3 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-trumpet-fanfare-2293.mp3').then(r => r.arrayBuffer());
  // decode into AudioBuffers
  const audioBuffer1 = await audioContext.decodeAudioData(arrayBuffer1);
  const audioBuffer2 = await audioContext.decodeAudioData(arrayBuffer2);
  const audioBuffer3 = await audioContext.decodeAudioData(arrayBuffer3);
  // create one source node per clip
  const audioClip1 = audioContext.createBufferSource();
  audioClip1.buffer = audioBuffer1;
  audioClip1.connect(gainNode);
  const audioClip2 = audioContext.createBufferSource();
  audioClip2.buffer = audioBuffer2;
  audioClip2.connect(gainNode);
  const audioClip3 = audioContext.createBufferSource();
  audioClip3.buffer = audioBuffer3;
  audioClip3.connect(gainNode);
  // connect volume control
  gainNode.connect(audioContext.destination);
  // schedule playback on the context timeline (start() replaces the long-deprecated noteOn())
  const t0 = audioContext.currentTime;
  audioClip1.start(t0);
  audioClip2.start(t0 + 5);
  audioClip3.start(t0 + 20);
  // controls
  document.getElementById('AudioPlay').addEventListener('click', (clickEvent) => { /* code */ });
  document.getElementById('AudioPause').addEventListener('click', (clickEvent) => { /* code */ });
  document.getElementById('AudioStop').addEventListener('click', (clickEvent) => { /* code */ });
  document.getElementById('AudioVolume').addEventListener('change', (changeEvent) => { /* code */ });
};
document.getElementById('AudioLoad').addEventListener('click', (clickEvent) => myPlayerLoad());
</script>
It's not a problem to play more than one audio file, but you will have a better experience if you create a pre-loader that downloads the files and prepares them for use before the rest of the program runs.
Here's a link to one that I created (with help from a great guy!) to load my gaming assets, including sound files (which need a bit more prep than the others): file loader
I use it to load and prepare a number of file types. The load() method is what you use to load the files. It returns a promise which, when fulfilled, allows you to run the rest of your program. It's comprehensively documented throughout, and you will see at the top of the file that sound files depend on an imported function makeSound(), which you can find in the lib folder along with the assets file. The makeSound() function does far more than you need, but you might find some interesting techniques in there. The book from which this was created, along with its author, are attributed throughout.
Hope this helps. If you have any questions, don't hesitate to ask.
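As a rough sketch of the pre-loading idea (not the linked loader itself), the core can be a single function that fetches and decodes every file before resolving; the fetch function is injectable here purely for illustration:

```javascript
// Minimal pre-loader sketch: fetch and decode all clips up front, resolving
// only once every AudioBuffer is ready. `fetchFn` defaults to the global fetch.
function loadAllClips(context, urls, fetchFn = fetch) {
  return Promise.all(
    urls.map((url) =>
      fetchFn(url)
        .then((response) => response.arrayBuffer())
        .then((data) => context.decodeAudioData(data))
    )
  );
}

// Browser usage (hypothetical file names):
// const buffers = await loadAllClips(audioContext, ["a.mp3", "b.mp3", "c.mp3"]);
// ...then build one source node per buffer and start() them on a schedule.
```

Because Promise.all fails fast, a single missing file rejects the whole load, which is usually what you want before starting a game loop.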
Currently I have some stream URLs (like this example) which I'm playing normally with new Audio(), like the code below:
const audio = new Audio();
audio.src = 'http://radio.talksport.com/stream?awparams=platform:ts-tunein;lang:en&aw_0_1st.playerid=RadioTime&aw_0_1st.skey=1572173094&aw_0_1st.platform=tunein';
audio.load();
audio.play();
But I'm struggling to add a custom buffer to the given stream, like setting a buffer of 2 minutes. Is this possible with streaming?
This is a live stream, so no, you cannot set a buffer of 2 minutes.
(Unless of course you want to have the user wait 2 minutes before playing anything back.)
Is there a way to play the audio I am recording while I'm still recording it?
This is my code. I'm recording the audio, and every time an audio chunk is ready a call to onaudioprocess is made. It receives an instance of AudioProcessingEvent. I'm not familiar with the Web Audio API, so I'm not sure what to do to listen to it. I want my speakers to output the sound.
function startRecording(stream) {
  var context = new AudioContext();
  var audio_input = context.createMediaStreamSource(stream);
  var buffer_size = 2048;
  var recorder = context.createScriptProcessor(buffer_size, 1, 1);
  recorder.onaudioprocess = function (e) {
    // var data = e.inputBuffer.getChannelData(0);
    // AudioStream.write(data);
    var source = context.createBufferSource();
    source.buffer = e.inputBuffer;
    source.connect(context.destination);
    source.start(0);
    console.log(e.inputBuffer);
  };
  audio_input.connect(recorder);
  recorder.connect(context.destination);
}
The code inside onaudioprocess does not work. There is no error in the console, but nothing happens.
The output of e.inputBuffer looks fine:
AudioBuffer {length: 4096, duration: 0.09287981859410431, sampleRate: 44100, numberOfChannels: 1}
Read up on the audio processing event at https://webaudio.github.io/web-audio-api/#dom-audioprocessingevent to see what the audioprocess event does. You basically need to copy the event's inputBuffer to the outputBuffer and you'll be able to hear the audio.
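A sketch of that copy, with the per-channel step factored into a plain typed-array helper (only the commented wiring is browser-specific):

```javascript
// Copy one channel of samples from the input buffer to the output buffer.
// subarray() guards against a length mismatch between the two buffers.
function copyChannel(input, output) {
  output.set(input.subarray(0, output.length));
}

// Browser wiring inside the ScriptProcessorNode handler:
// recorder.onaudioprocess = function (e) {
//   for (let ch = 0; ch < e.outputBuffer.numberOfChannels; ch++) {
//     copyChannel(e.inputBuffer.getChannelData(ch), e.outputBuffer.getChannelData(ch));
//   }
// };
```

With the output buffer filled, the node's connection to context.destination carries the live audio to the speakers.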
I am creating a MediaStream object and adding a video track to it from a canvas using the captureStream() function. This works fine.
However, I am trying to add audio as a separate track from a video element. I can't seem to find a way to get an AudioTrack object from an HTML video element.
Currently HTMLMediaElement.audioTracks is not supported in Chrome. According to the Mozilla developer site I should be able to use HTMLMediaElement.captureStream() to return a MediaStream object from which I should be able to retrieve the separate tracks, but I just get a 'captureStream is not a function' error.
Perhaps I'm missing something very obvious, but I would greatly appreciate any help on this.
Below is my current code:
var stream = new MediaStream();
//Works fine for adding video source
var videotracks = myCanvas.captureStream().getTracks();
var videostream = videotracks[0];
stream.addTrack(videostream);
//Currently not supported in Chrome
var audiotracks = myVid.audioTracks;
var audiostream = audiotracks[0];
stream.addTrack(audiostream);
To get an audio stream from a video element in a cross-browser way, use the AudioContext API: createMediaStreamDestination() + createMediaElementSource().
// if all you need is the audio, then you should probably even load your video in an Audio element
var vid = document.createElement('video');
vid.onloadedmetadata = generateAudioStream;
vid.crossOrigin = 'anonymous';
vid.src = 'https://dl.dropboxusercontent.com/s/bch2j17v6ny4ako/movie720p.mp4';

function generateAudioStream() {
  var audioCtx = new AudioContext();
  // create a stream from our AudioContext
  var dest = audioCtx.createMediaStreamDestination();
  // connect our video element's output to the stream
  var sourceNode = audioCtx.createMediaElementSource(this);
  sourceNode.connect(dest);
  // start the video
  this.play();
  // your audio stream
  doSomethingWith(dest.stream);
}

function doSomethingWith(audioStream) {
  // the audio element that will be shown in the doc
  var output = new Audio();
  output.srcObject = audioStream;
  output.controls = true;
  output.play();
  document.body.appendChild(output);
}
To add audio to a canvas stream, see:
MediaStream Capture Canvas and Audio Simultaneously
I am using a SoundCloud URL as audio.src. It only plays the unprocessed version when I run it through the delay chain I have.
Here is the fiddle:
http://jsfiddle.net/ehsanziya/nwaH3/
var context = new webkitAudioContext();
var audio = new Audio(); // creates an HTML5 Audio element
var url = 'http://api.soundcloud.com/tracks/33925813/stream' + '?client_id=c625af85886c1a833e8fe3d740af753c';
// wraps the SoundCloud stream in an audio element
audio.src = url;
var source = context.createMediaElementSource(audio);
var input = context.createGainNode();
var output = context.createGainNode();
var fb = context.createGainNode();
fb.gain.value = 0.4;
var delay = context.createDelayNode();
delay.delayTime.value = 0.5;
// dry
source.connect(input);
input.connect(output);
// wet
input.connect(delay);
delay.connect(fb);
fb.connect(delay);
delay.connect(output);
source.mediaElement.play();
The chain works with an oscillator node.
What is the reason for this?
And is there any other way of processing a streaming sound from SoundCloud with the Web Audio API?
You need to wait for the canplaythrough event on your audio element to fire before you can use it with createMediaElementSource.
So just add the event listener, and wait until the callback fires before you assign source = context.createMediaElementSource(audio); and make all of your connections.
Here's an updated jsFiddle that'll do what you want: http://jsfiddle.net/nwaH3/3/
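For illustration, the deferred setup can be wrapped in a small helper (a sketch; the variable names in the commented wiring follow the question's code):

```javascript
// Run `onReady` once the element has buffered enough to play through,
// then never again ({ once: true } removes the listener automatically).
function whenPlayable(mediaElement, onReady) {
  mediaElement.addEventListener("canplaythrough", onReady, { once: true });
}

// Browser wiring, matching the question's code:
// whenPlayable(audio, function () {
//   var source = context.createMediaElementSource(audio);
//   source.connect(input); // then build the dry/wet chain exactly as before
//   audio.play();
// });
```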