I am trying to play multiple audio clips in sequence using an AudioContext so that the playback is smooth, but I am doing something wrong. Most of the documentation I've seen only shows how to use a synthesizer or play a single audio file. How would I go about creating this in plain JavaScript?
The idea is to:
Create AudioContext
Load the required audio as a buffer
Queue up the audio in a sequence
Let the user play/pause/stop the audio
Let the user control volume and speed
For example:
Audio1 starts at 0 seconds and runs for 5 seconds
Audio2 starts at 5 seconds and runs for 5 seconds
Audio3 starts at 20 seconds and runs for 10 seconds
<button id="AudioLoad">Load</button>
<button id="AudioPlay">Play</button>
<button id="AudioPause">Pause</button>
<button id="AudioStop">Stop</button>
<input id="AudioVolume" type="range" step="0.1" min="0.1" max="1.0" value="0.4">
<script>
const myPlayerLoad = (async () => {
const audioContext = new window.AudioContext();
const gainNode = audioContext.createGain();
// load audio buffer from server
const audioBuffer1 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-game-show-suspense-waiting-667.mp3').then(r => r.arrayBuffer());
const audioBuffer2 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-retro-game-emergency-alarm-1000.mp3').then(r => r.arrayBuffer());
const audioBuffer3 = await fetch('https://assets.mixkit.co/sfx/preview/mixkit-trumpet-fanfare-2293.mp3').then(r => r.arrayBuffer());
// decode the audio data into AudioBuffers
const audioContext1 = await audioContext.decodeAudioData(audioBuffer1);
const audioContext2 = await audioContext.decodeAudioData(audioBuffer2);
const audioContext3 = await audioContext.decodeAudioData(audioBuffer3);
// create audio source
const audioClip1 = audioContext.createBufferSource();
audioClip1.buffer = audioContext1;
audioClip1.connect(gainNode);
const audioClip2 = audioContext.createBufferSource();
audioClip2.buffer = audioContext2;
audioClip2.connect(gainNode);
const audioClip3 = audioContext.createBufferSource();
audioClip3.buffer = audioContext3;
audioClip3.connect(gainNode);
// connect volume control
gainNode.connect(audioContext.destination);
// play audio at time
audioClip1.noteOn(0);
audioClip2.noteOn(5);
audioClip3.noteOn(20);
// controls
document.getElementById('AudioPlay').addEventListener('click',(clickEvent)=>{ /* code */ });
document.getElementById('AudioPause').addEventListener('click',(clickEvent)=>{ /* code */ });
document.getElementById('AudioStop').addEventListener('click',(clickEvent)=>{ /* code */ });
document.getElementById('AudioVolume').addEventListener('change',(changeEvent)=>{ /* code */ });
});
document.getElementById('AudioLoad').addEventListener('click',(clickEvent)=>myPlayerLoad());
</script>
It's not a problem to play more than one music file, but you will have a better experience if you create a pre-loader that downloads the files and prepares them for use before the rest of the program runs.
Here's a link to one that I created (with help from a great guy!) to load my gaming assets, including sound files (which need a bit more prep than the others): file loader
I use it to load and prepare a number of file types. The load() method is what you use to load the files. It returns a promise which, once fulfilled, allows you to run the rest of your program. It's comprehensively documented throughout, and you will see at the top of the file that sound files depend on an imported function makeSound(), which you can find in the lib folder along with the assets file. The makeSound() function does far more than you need, but you might find some interesting techniques in there. The book from which this was created, along with its author, are attributed throughout.
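One note on the scheduling itself: AudioBufferSourceNode no longer has a noteOn() method; you schedule playback with start(when), where when is expressed in the AudioContext's time coordinates. A minimal sketch, reusing the names and offsets from your snippet (illustrative only, not a drop-in fix):
// schedule the three clips at the offsets from your example
const startAt = audioContext.currentTime;
audioClip1.start(startAt);        // Audio1 at 0 seconds
audioClip2.start(startAt + 5);    // Audio2 at 5 seconds
audioClip3.start(startAt + 20);   // Audio3 at 20 seconds
// pause/resume the whole context, and control volume through the gain node
document.getElementById('AudioPause').addEventListener('click', () => audioContext.suspend());
document.getElementById('AudioPlay').addEventListener('click', () => audioContext.resume());
document.getElementById('AudioVolume').addEventListener('change', (e) => { gainNode.gain.value = parseFloat(e.target.value); });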
Hope this helps. If you have any questions, don't hesitate to ask.
Related
So I created the audio:
const jumpy = new Audio();
jumpy.src = "./audio/jump2.wav";
and generated an event listener that triggers the audio:
const cvs = document.getElementById("ghost");
cvs.addEventListener("click", function(evt){
jumpy.play()
});
The problem is that the browser first waits for the audio to play in full (about 1000 ms) before it will play it again, but I want the audio to restart every time I click.
How can I do that?
For short sounds that you want to use multiple times like this, it is better to use the AudioBufferSourceNode in the Web Audio API.
For example:
const buffer = await audioContext.decodeAudioData(/* audio data */);
const bufferSourceNode = audioContext.createBufferSource();
bufferSourceNode.buffer = buffer;
bufferSourceNode.connect(audioContext.destination);
bufferSourceNode.start();
The buffer will be kept in memory, already decoded to PCM and ready to play. Then when you call .start(), it will play right away.
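Wired to a click handler like the one in the question, a minimal sketch could look like this (decode once up front, then create a fresh source node per click; the ./audio/jump2.wav path and the ghost element are taken from the question):
const audioContext = new AudioContext();
let jumpBuffer = null;
// decode the file once, up front
fetch("./audio/jump2.wav")
  .then((response) => response.arrayBuffer())
  .then((data) => audioContext.decodeAudioData(data))
  .then((buffer) => { jumpBuffer = buffer; });
document.getElementById("ghost").addEventListener("click", () => {
  if (!jumpBuffer) return;                       // not decoded yet
  if (audioContext.state === "suspended") audioContext.resume(); // contexts start suspended until a user gesture
  const source = audioContext.createBufferSource();
  source.buffer = jumpBuffer;                    // points at the same decoded data every time
  source.connect(audioContext.destination);
  source.start();                                // plays immediately, overlapping earlier clicks
});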
See also: https://stackoverflow.com/a/62355960/362536
I'm developing a game using JavaScript and other web technologies. In it, there's a game mode that is basically a tower defense, in which multiple objects may need to use the same audio file (.ogg) at the same time. Loading the file and creating a new audio object for each of them lags the game too much, even if I attempt to stream it instead of doing a simple synchronous read. And if I create and save one audio object in a variable to use multiple times, then whenever it is already playing and a new request comes in, the one that was playing stops to make way for the new one (so, with enough of those, nothing plays at all).
Given those issues, I decided to make a copy of the audio object each time it was going to be played, but that is not only slow, it also creates a minor memory leak (at least the way I did it).
How can I properly cache audio for re-use? Note that I'm pretty sure I'll need a new playback instance each time, because each sound has a position, and thus each of them will play differently, based on the player's position relative to the object that is playing the audio.
You tagged your question with web-audio-api, but from the body of this question, it seems you are using an HTMLMediaElement <audio> instead of the Web Audio API.
So I'll invite you to make the transition to the Web Audio API.
From there you'll be able to decode your audio file once, keep the decoded data just once as an AudioBuffer, and create many source nodes that all hook onto that one and only AudioBuffer, without using any more memory.
const btn = document.querySelector("button")
const context = new AudioContext();
// a GainNode to control the output volume of our audio
const volumeNode = context.createGain();
volumeNode.gain.value = 0.5; // from 0 to 1
volumeNode.connect(context.destination);
fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
// get the resource as an ArrayBuffer
.then((resp) => resp.arrayBuffer())
// decode the Audio data from this resource
.then((buffer) => context.decodeAudioData(buffer))
// now we have our AudioBuffer object, ready to be played
.then((audioBuffer) => {
btn.onclick = (evt) => {
// allowing an AudioContext to make noise
// must be triggered by a user gesture
if (context.state === "suspended") {
context.resume();
}
// a very light player object
const source = context.createBufferSource();
// a simple pointer to the big AudioBuffer (no copy)
source.buffer = audioBuffer;
// connect to our volume node, itself connected to audio output
source.connect(volumeNode);
// start playing now
source.start(0);
};
// now you can spam the button!
btn.disabled = false;
})
.catch(console.error);
<button disabled>play</button>
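Since each of your objects has a position, the same pattern extends naturally: give every playback its own panner node while all of them keep pointing at the single decoded AudioBuffer. A sketch of that idea (the coordinates are illustrative; derive them from your game state):
// one cheap source node and one panner per playback - the AudioBuffer itself is never copied
function playAt(audioBuffer, x, y, z) {
  const source = context.createBufferSource();
  source.buffer = audioBuffer;
  const panner = context.createPanner();
  panner.positionX.value = x;
  panner.positionY.value = y;
  panner.positionZ.value = z;
  source.connect(panner).connect(volumeNode); // reuse the shared volume node from above
  source.start(0);
}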
I'm writing an Electron app to deliver separate audio streams to 10 audio channels, using a Focusrite Scarlett 18i20 USB sound card. Windows 10 splits the outputs into the following stereo outputs:
Focusrite 1 + 2
Focusrite 3 + 4
Focusrite 5 + 6
Focusrite 7 + 8
Focusrite 9 + 10
Because of this, I need the app to send the audio to a specific output, as well as splitting the stereo channels. Example: to deliver audio to the 3rd output, I need to send it to "Focusrite 3 + 4" on the left channel. Unfortunately, I can't seem to do both at the same time.
I start with an audio object:
let audio = new Audio("https://test.com/test.mp3");
I do the following to get the sinkIds for the outputs:
let devices = await navigator.mediaDevices.enumerateDevices();
devices = devices.filter(device => device.kind === 'audiooutput');
The following works for sending the audio to a specific sinkId:
audio.setSinkId(sinkId).then(() => {
audio.play();
});
This also works: I do the following to play only the left channel:
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
panner.pan.value = -1;
source.connect(panner);
panner.connect(audioContext.destination);
So far everything is fine. But when I try to combine these, the sinkId is ignored, and the audio is being sent to the default audio output. I have tried several approaches, including this one:
audio.setSinkId(sinkId).then(() => {
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
panner.pan.value = -1;
source.connect(panner);
panner.connect(audioContext.destination);
});
I have also tried an approach using audioContext.createChannelMerger instead of the stereoPanner. This works perfectly on its own, but not combined with setSinkId. I get the same behavior on Windows 10 and Mac.
Any ideas?
To route the audio output of an AudioContext to a specific output device you would need to use a MediaStreamAudioDestinationNode in combination with another audio element. The sinkId of your existing audio element doesn't have any effect anymore once it gets routed into the AudioContext.
The desired signal flow would look something like this:
audio
↓
MediaElementAudioSourceNode
↓
StereoPannerNode
↓
MediaStreamAudioDestinationNode
↓
outputAudio
Your AudioContext code would then need to be changed to route everything to a MediaStreamAudioDestinationNode instead of the default destination.
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
let destination = audioContext.createMediaStreamDestination();
panner.pan.value = -1;
source.connect(panner).connect(destination);
The last part is to route the stream of the MediaStreamAudioDestinationNode to a newly created audio element which can then be used to set the sinkId.
const outputAudio = new Audio();
outputAudio.srcObject = destination.stream;
outputAudio.setSinkId(sinkId);
outputAudio.play();
Please note that using an audio element just for setting the sinkId is a bit of a hack. There are plans to make setting the sinkId a feature of the AudioContext.
https://github.com/WebAudio/web-audio-api-v2/issues/10
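That feature has since started shipping in Chromium-based browsers (which covers recent Electron builds), so where it is available the extra audio element isn't needed. A sketch, assuming AudioContext.setSinkId() is supported in your runtime:
let audioContext = new AudioContext();
let source = audioContext.createMediaElementSource(audio);
let panner = audioContext.createStereoPanner();
panner.pan.value = -1;
source.connect(panner).connect(audioContext.destination);
// route the whole context to the chosen device, then start playback
audioContext.setSinkId(sinkId).then(() => audio.play());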
I'm building a wheel of fortune in html+js that spins rather quickly. Every time a new color flies by the mark, the wheel should play a click-sound. At top speed this sounds almost like a machine gun, so a new file starts playing before the old one is finished basically. The file itself is always the same: click.wav
It works fine in Chrome, but only in Chrome. Firefox has a weird bug where it only plays the sound if there is some other audio source active, such as a YouTube video playing in a different tab. Edge and Safari kind of save up the clicks and then play them all simultaneously at the end. It's a mess...
I use the method described here, which works by cloning an <audio> tag.
I guess this is where the problem is:
var sound = new Audio("sounds/click.wav");
sound.preload = 'auto';
sound.load();
function playsound(){
var click=sound.cloneNode();
click.volume=1;
click.play();
}
Here is a simplified version of my spinning function that just calls the playsound() function several times per second:
function rotateWheel(){
angle = angle + acceleration
while (angle >= 360) {
angle = angle - 360
}
var wheel = document.getElementById("wheel")
wheel.style.transform = "rotate("+angle +"deg)"
// play the click when a new segment rotates by
if(Math.floor(angle/21) != previousSegment){
playsound()
previousSegment = Math.floor(angle/21)
}
}
You used an answer from here. That method will at some point crash the browser process, because you either create a memory issue or fill up the DOM with elements the browser has to handle - so you should re-think your approach. And, as you found out, it will not work for heavy use in most browsers like Safari or Firefox.
Looking deeper into the <audio> tag specification, it becomes clear that there are many things that simply can't be done with it, which isn't surprising, since it was designed for media playback.
One of those limitations is the lack of fine-grained timing of sound.
So you have to find another method. For what you want, we use the Web Audio API, which was designed with online video games in mind.
Web Audio API
An AudioContext is for managing and playing all sounds. To produce a sound using the Web Audio API, create one or more sound sources and connect them to the sound destination provided by the AudioContext instance (usually the speaker).
The AudioBuffer
With the Web Audio API, audio files can be played only after they’ve been loaded into a buffer. Loading sounds takes time, so assets that are used in the animation/game should be loaded on page load, at the start of the game or level, or incrementally while the player is playing.
The basic steps
We use an XMLHttpRequest to load data into a buffer from an audio file.
Next, we set up an asynchronous callback and send the actual request to load the file.
Once a sound has been buffered and decoded, it can be triggered instantly.
Each time it is triggered, a different instance of the buffered sound is created.
A key feature of sound effects in games is that there can be many of them simultaneously.
So to take your example of the "machine gun": imagine you're in the middle of a gunfight, shooting a machine gun.
The machine gun fires many times per second, causing tens of sound effects to be played at the same time. This is where the Web Audio API really shines.
A simple example for your application:
/* global AudioContext:true,
*/
var clickingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
function loadClickSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  // Decode asynchronously once the file has arrived
  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      if (!buffer) {
        console.log('Error decoding file data: ' + url);
        return;
      }
      clickingBuffer = buffer;
    });
  };
  request.onerror = function() {
    console.log('BufferLoader: XHR error');
  };
  request.send();
}
function playSound(buffer, time, volume) {
  var source = context.createBufferSource();   // creates a sound source
  source.buffer = buffer;                      // tell the source which sound to play
  var gainNode = context.createGain();         // create a gain node to control the volume
  source.connect(gainNode);                    // connect the source to the gain node
  gainNode.connect(context.destination);       // connect the gain node to the destination (the speakers)
  gainNode.gain.value = volume;                // set the volume
  source.start(time);                          // play the source at the desired time, 0 = now
}
// You call this in your document-ready handler
loadClickSound('sounds/click.wav');
// and this plays the sound (once the buffer has finished loading)
playSound(clickingBuffer, 0, 1);
Now you can play around with different timings and volume variations, for example by introducing a random factor.
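A sketch of such a random factor layered on the same clickingBuffer (the ranges here are arbitrary):
// slight random pitch and volume per click so rapid repeats don't sound robotic
function playClickWithVariation() {
  var source = context.createBufferSource();
  source.buffer = clickingBuffer;
  source.playbackRate.value = 0.9 + Math.random() * 0.2; // 0.9x - 1.1x speed/pitch
  var gainNode = context.createGain();
  gainNode.gain.value = 0.8 + Math.random() * 0.2;        // 80% - 100% volume
  source.connect(gainNode);
  gainNode.connect(context.destination);
  source.start(0);
}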
If you need a more complex solution with different clicking sounds (stored in a buffer array) and volume/distance variations, that would be a longer piece of code.
I've been working on using the html audio tag to play some audio files. The audio plays alright, but the duration property of the audio tag is always returning infinity.
I tried the accepted answer to this question but with the same result. Tested with Chrome, IE and Firefox.
Is this a bug with the audio tag, or am I missing something?
Here is some of the code I'm using to play the audio files.
JavaScript function called when the play button is pressed:
function playPlayerV2(src) {
document.getElementById("audioplayerV2").addEventListener("loadedmetadata", function (_event) {
console.log(player.duration);
});
var player = document.getElementById("audioplayerV2");
player.src = src;
player.load();
player.play();
}
the audio tag in html
<audio controls="true" id="audioplayerV2" style="display: none;" preload="auto">
Note: I'm hiding the standard audio player with the intent of using a custom layout and controlling the player via JavaScript; this does not seem to be related to my problem.
try this
var getDuration = function (url, next) {
var _player = new Audio(url);
_player.addEventListener("durationchange", function (e) {
if (this.duration!=Infinity) {
var duration = this.duration
_player.remove();
next(duration);
};
}, false);
_player.load();
_player.currentTime = 24*60*60; //fake big time
_player.volume = 0;
_player.play();
//waiting...
};
getDuration ('/path/to/audio/file', function (duration) {
console.log(duration);
});
I think this is due to a chrome bug. Until it's fixed:
if (video.duration === Infinity) {
video.currentTime = 10000000;
setTimeout(() => {
video.currentTime = 0; // to reset the time, so it starts at the beginning
}, 1000);
}
let duration = video.duration;
This works for me
const audio = document.getElementById("audioplayer");
audio.addEventListener('loadedmetadata', () => {
if (audio.duration === Infinity) {
audio.currentTime = 1e101
audio.addEventListener('timeupdate', getDuration)
}
})
function getDuration() {
audio.currentTime = 0
audio.removeEventListener('timeupdate', getDuration)
console.log(audio.duration)
}
In case you control the server and can make it send proper media headers - this is what helped the OP.
I faced this problem with files stored in Google Drive when fetching them in the mobile version of Chrome. I cannot control the Google Drive response, and I have to deal with it somehow.
I don't have a solution that satisfies me yet, but I tried the idea from both posted answers - which is basically the same: make the audio/video object seek to the real end of the resource. After Chrome finds the real end position, it gives you the duration. However, the result is unsatisfying.
What this hack really does is force Chrome to load the resource into memory completely. So, if the resource is too big, or the connection is too slow, you end up waiting a long time for the file to be downloaded behind the scenes. And you have no control over that file - it is handled by Chrome, and once it decides that it is no longer needed it will dispose of it, so the bandwidth may be spent inefficiently.
So, if you can load the file yourself, it is better to download it (e.g. as a blob) and feed it to your audio/video control.
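A minimal sketch of that approach (the URL is a placeholder; download the complete file, then hand it to the element as a blob URL so the browser has the whole resource and can report a finite duration):
// download the full file first, then feed it to the <audio> element as a blob URL
fetch("/path/to/audio/file")
  .then((response) => response.blob())
  .then((blob) => {
    const audio = document.getElementById("audioplayerV2");
    audio.src = URL.createObjectURL(blob);
    audio.addEventListener("loadedmetadata", () => console.log(audio.duration));
  });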
If this is a Twilio mp3, try the .wav version. The mp3 is coming across as a stream and it fools the audio players.
To use the .wav version, just change the format of the source url from .mp3 to .wav (or leave it off, wav is the default)
Note - the wav file is 4x larger, so that's the downside to switching.
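For example (the variable name here is illustrative):
// swap the Twilio recording URL's extension from .mp3 to .wav
audio.src = twilioRecordingUrl.replace(/\.mp3$/, ".wav");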
Not a direct answer but in case anyone using blobs came here, I managed to fix it using a package called webm-duration-fix
import fixWebmDuration from "webm-duration-fix";
...
fixedBlob = await fixWebmDuration(blob);
...
If you want to modify the video file completely, you can use the package "webmFixDuration". Other methods are applied only at the display level, on the video tag; with this method, the complete video file is modified.
webmFixDuration GitHub example:
mediaRecorder.onstop = async () => {
const duration = Date.now() - startTime;
const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
const fixedBlob = await webmFixDuration(buggyBlob, duration);
displayResult(fixedBlob);
};