Does MediaElementSource use less memory than BufferSource in the Web Audio API? - javascript

I am making a little app that will play audio files (mp3, wav) with the ability to apply an equalizer to them (like a regular audio player). For this I am using the Web Audio API.
I managed to get the playback part working in two ways. The first uses decodeAudioData() of BaseAudioContext:
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
        source.buffer = buffer;
        source.connect(audioCtx.destination);
        source.loop = true;
      },
      function(e) { console.log("Error with decoding audio data" + e.err); });
  };
  request.send();
}
// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
};
and a much easier way with Audio() and createMediaElementSource():
let _audioContainer = new Audio('assets/mp3/pink_noise.wav');
let _sourceNodes = _AudioContext.createMediaElementSource(_audioContainer);
_sourceNodes.connect(_AudioContext.destination);
_audioContainer.play();
I think the second one uses less memory than createBufferSource(), because createBufferSource() stores the complete decoded audio file in memory. But I am not sure about this; I do not have much experience with tools like Chrome DevTools, so I may not be reading the numbers correctly.
Does createMediaElementSource() use less memory than createBufferSource()?
Edit:
Using Chrome's Task Manager, it seems that with createBufferSource() just loading the file increases the Memory column by around 40,000 K, against roughly 60 K with createMediaElementSource(), and the JavaScript Memory column by about 1,000 K vs 20 K.

I think you've found the answer in the task manager.
You need to be aware of a couple of things.
With a media element you lose sample-accurate control; this may not be important to you.
You need appropriate CORS permissions when using a MediaElementAudioSourceNode (otherwise the node outputs silence); this is not a problem if all of your assets are served from the same origin. See the sketch below.
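For example, a minimal sketch of the media-element route with a cross-origin file; the URL is a placeholder and the server is assumed to send a matching Access-Control-Allow-Origin header:
const ctx = new AudioContext();
const audio = new Audio('https://cdn.example.com/track.mp3'); // placeholder URL
audio.crossOrigin = 'anonymous'; // without CORS approval the node outputs silence
const node = ctx.createMediaElementSource(audio);
node.connect(ctx.destination); // insert your equalizer nodes here instead for the EQ use case
audio.play();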

Related

How to rapidly play multiple copies of a soundfile in javascript

I'm building a wheel of fortune in HTML+JS that spins rather quickly. Every time a new color flies by the mark, the wheel should play a click sound. At top speed this sounds almost like a machine gun, so a new file basically starts playing before the old one has finished. The file itself is always the same: click.wav
It works fine in Chrome, but only in Chrome. Firefox has a weird bug where it only plays the sound if some other audio source is active, such as a YouTube video playing in a different tab. Edge and Safari kind of save up the clicks until the end and then play them all simultaneously. It's a mess...
I use the method described here, which clones an <audio> tag.
I guess this is where the problem is:
var sound = new Audio("sounds/click.wav");
sound.preload = 'auto';
sound.load();
function playsound() {
  var click = sound.cloneNode();
  click.volume = 1;
  click.play();
}
Here is a simplified version of my spinning function that just calls the playsound() function several times per second:
function rotateWheel() {
  angle = angle + acceleration;
  while (angle >= 360) {
    angle = angle - 360;
  }
  var wheel = document.getElementById("wheel");
  wheel.style.transform = "rotate(" + angle + "deg)";
  // play the click when a new segment rotates by
  if (Math.floor(angle / 21) != previousSegment) {
    playsound();
    previousSegment = Math.floor(angle / 21);
  }
}
You used an answer from here. That method will at some point crash the browser process, because you either create a memory issue or fill up the DOM with elements the browser has to handle, so you should rethink your approach. And, as you found out, it will not work for heavy use in most browsers, like Safari or Firefox.
Looking deeper into the <audio> tag specification, it becomes clear that there are many things that simply can't be done with it, which isn't surprising, since it was designed for media playback.
One of those limitations is: no fine-grained timing of sound.
So you have to find another method. For what you want, we use the Web Audio API, which was designed with uses like online video games in mind.
Web Audio API
An AudioContext is for managing and playing all sounds. To produce a sound using the Web Audio API, create one or more sound sources and connect them to the sound destination provided by the AudioContext instance (usually the speaker).
The AudioBuffer
With the Web Audio API, audio files can be played only after they’ve been loaded into a buffer. Loading sounds takes time, so assets that are used in the animation/game should be loaded on page load, at the start of the game or level, or incrementally while the player is playing.
The basic steps
We use an XMLHttpRequest to load data into a buffer from an audio file.
Next, we set an asynchronous callback and send the actual request to load the file.
Once a sound has been buffered and decoded, it can be triggered instantly.
Each time it is triggered, a different instance of the buffered sound is created.
A key feature of sound effects in games is that there can be many of them simultaneously.
So, to take your example of the "machine gun": imagine you're in the middle of a gunfight, firing a machine gun.
The machine gun fires many times per second, causing tens of sound effects to be played at the same time. This is where the Web Audio API really shines.
A simple example for your application:
/* global AudioContext:true */
var clickingBuffer = null;
// Fix up prefixing
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function loadClickSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  // Decode asynchronously once the file has loaded
  request.onload = function() {
    context.decodeAudioData(request.response, function(buffer) {
      if (!buffer) {
        console.log('Error decoding file data: ' + url);
        return;
      }
      clickingBuffer = buffer;
    });
  };
  request.onerror = function() {
    console.log('BufferLoader: XHR error');
  };
  // onerror and send() must sit outside the onload handler,
  // otherwise the request is never sent
  request.send();
}
function playSound(buffer, time, volume) {
  var source = context.createBufferSource(); // creates a sound source
  source.buffer = buffer;                    // tell the source which sound to play
  var gainNode = context.createGain();       // create a gain node to control the volume
  source.connect(gainNode);                  // route the source through the gain node only;
  gainNode.connect(context.destination);     // a direct source->destination connection
                                             // would bypass the volume control
  gainNode.gain.value = volume;              // set the volume
  source.start(time);                        // play the source at the desired time, 0 = now
}
// Call this once, e.g. when your document is ready
loadClickSound('sounds/click.wav');
// ... and this plays the sound (once the buffer has been loaded)
playSound(clickingBuffer, 0, 1);
Now you can play around with different timings and volume variations, for example by introducing a random factor, as sketched below.
If you need a more complex solution with different clicking sounds (stored in a buffer array) and volume/distance variations, that would be a longer piece of code.
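As a rough illustration, built on the playSound() function above (the jitter values are arbitrary):
// Sketch: play each click with slight random volume and timing variation.
function playRandomizedClick() {
  var volume = 0.8 + Math.random() * 0.2;                // volume between 0.8 and 1.0
  var when = context.currentTime + Math.random() * 0.01; // up to 10 ms of jitter
  playSound(clickingBuffer, when, volume);
}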

WebAudioApi StreamSource

I'd like to use the Web Audio API with streams. Prelistening is very important and can't be realized when I have to wait for each audio file to be downloaded completely.
Downloading the entire audio data is not what I intend, but it is the only way I can get it to work at the moment:
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which is required by the audioCtx anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.loop = true;
      source.start(0);
    },
    function(e) { console.log("Error with decoding audio data" + e.err); });
};
request.send();
I found a possibility to use a stream when requesting it from navigator.mediaDevices:
navigator.mediaDevices.getUserMedia({audio: true, video: true})
  .then(function(stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    // a MediaStreamAudioSourceNode has no play(); it produces audio once connected
    source.connect(audioCtx.destination);
  });
Is it possible to use an XHR/fetch request instead of navigator.mediaDevices to get the stream?
// fetch doesn't support a range-header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
  const reader = response.body.getReader();
  // a ReadableStream is not accepted by createMediaStreamSource
  const stream = new ReadableStream({...});
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(audioCtx.destination);
});
It doesn't work, because a ReadableStream is not a MediaStream and cannot be passed to createMediaStreamSource().
My first step is to recreate the functionality of the HTML audio element, including seeking. Is there any way to get an XHR/fetch stream and feed it into an AudioContext?
The final idea is to create a single-track audio editor with fades, cutting, prelistening, mixing and export functionality.
EDIT:
Another attempt was to use the HTML audio element and create a source node from it:
var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source node doesn't have a start() method
// the media element is not handled by the context's internal scheduler
source.mediaElement.play();
The audio element supports streaming, but it cannot be handled by the context's scheduler, which is important for building an audio editor with prelistening functionality.
It would be great to point a standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.
I experienced this problem before and have been working on the demo solution below to stream audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because decodeAudioData() cannot decode partial files, it currently has to be bypassed, and custom decoders that allow chunk-based decoding must be provided:
https://github.com/AnthumChris/fetch-stream-audio
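As a rough sketch of the chunked-download side (this is not the repo's actual code; handleChunk() is a placeholder for a chunk-capable decoder):
fetch('audio/track.opus').then(function(response) {
  const reader = response.body.getReader();
  function pump() {
    return reader.read().then(function(result) {
      if (result.done) return;   // stream finished
      handleChunk(result.value); // Uint8Array chunk -> custom decoder (placeholder)
      return pump();
    });
  }
  return pump();
});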

Synchronize audios with HTML5 and Javascript

I want to join two audios into one and synchronize them with HTML5 on the client side. I've seen that the Web Audio API can do many things, but I have not been able to find out how.
I have links to two audio files (.mp3, .wav, ...). What I want is to synchronize these two audio files, like a voice and a song. I do not want them to play one after the other; I want them in sync.
I would like to do it all on the client side using HTML5, without needing the server. Is this possible to do?
Thank you so much for your help.
As I understand it, you have two audio files which you want to render together on the client. The Web Audio API can do this for you quite easily, entirely in JavaScript. A good place to start is http://www.html5rocks.com/en/tutorials/webaudio/intro/
An example script would be:
var context = new (window.AudioContext || window.webkitAudioContext)(); // Create an audio context
// Create an XML HTTP Request to collect your audio files
// https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();
var audio_buffer_1, audio_buffer_2;
xhr1.open("GET", "your_url_to_audio_1");
xhr1.responseType = 'arraybuffer';
xhr1.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr1.response, function(buffer) {
    audio_buffer_1 = buffer;
  }, function(error) {});
};
xhr2.open("GET", "your_url_to_audio_2");
xhr2.responseType = 'arraybuffer';
xhr2.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr2.response, function(buffer) {
    audio_buffer_2 = buffer;
  }, function(error) {});
};
xhr1.send();
xhr2.send();
These load the Web Audio API AudioBuffers (https://webaudio.github.io/web-audio-api/#AudioBuffer) for your two files into the global variables audio_buffer_1 and audio_buffer_2.
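Note that decodeAudioData() is asynchronous, so both buffers must exist before you mix them. A minimal sketch (the counter and function names are mine; call onBufferReady() at the end of each decode success callback above):
var loaded = 0;
function onBufferReady() {
  // run the offline mix below only once both files have decoded
  if (++loaded === 2) mixBuffers();
}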
Now, to create a new audio buffer, you need to use an OfflineAudioContext (this is the code you would wrap in mixBuffers() above):
// Assumes both buffers are of the same length. If not, you need to modify the 2nd argument below
var offlineContext = new OfflineAudioContext(context.destination.channelCount, audio_buffer_1.duration * context.sampleRate, context.sampleRate);
var summing = offlineContext.createGain();
summing.connect(offlineContext.destination);
// Build the two buffer source nodes, attach their buffers and connect them to the summing node
var buffer_1 = offlineContext.createBufferSource();
var buffer_2 = offlineContext.createBufferSource();
buffer_1.buffer = audio_buffer_1;
buffer_2.buffer = audio_buffer_2;
buffer_1.connect(summing);
buffer_2.connect(summing);
// Do something with the result by adding a callback
offlineContext.oncomplete = function(event) {
  var renderedBuffer = event.renderedBuffer;
  // Place code here
};
// Begin the summing
buffer_1.start(0);
buffer_2.start(0);
offlineContext.startRendering();
Once rendering is done, the oncomplete callback receives an event whose renderedBuffer property is a new AudioBuffer containing the direct summation of the two buffers.
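If you just want to hear the result rather than export it, here is a minimal sketch to place inside the oncomplete callback above:
// Audition the summed mix through the live AudioContext.
var playback = context.createBufferSource();
playback.buffer = renderedBuffer;      // the mix produced by startRendering()
playback.connect(context.destination);
playback.start(0);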

How can I play audio elements in sync with key presses?

HTML
<textarea id="words" name="words"></textarea>
<audio id="type" src="type.mp3"></audio>
JS
document.getElementById('words').onkeydown = function() {
  document.getElementById('type').play();
};
I want type.mp3 to play any time I press a key.
But it is not played in sync with the keystrokes.
I am looking for a pure JS solution.
The audio media element depends on the buffering mechanism of the browser and may not play instantly when play() is called.
To play sounds in sync with key presses you would have to use the Web Audio API instead, which allows you to play an in-memory buffer and therefore start instantly.
Here is an example of how you can load and trigger the sound:
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var request = new XMLHttpRequest(),
    url = "https://dl.dropboxusercontent.com/s/8fp1hnkwp215gfs/chirp.wav",
    actx = new AudioContext(),
    abuffer;
// load file via XHR
request.open("GET", url, true);
request.responseType = "arraybuffer";
request.onload = function() {
  // Asynchronously decode the audio file data in request.response
  actx.decodeAudioData(request.response,
    function(buffer) {
      if (buffer) {
        abuffer = buffer; // keep a reference to the decoded buffer
        setup();          // set up the key handler
      }
    }
  );
};
request.send();
// setup key handler
function setup() {
  document.getElementById("txt").onkeydown = play;
}
// play sample - a new buffer source must be created each time
function play() {
  var src = actx.createBufferSource();
  src.buffer = abuffer;
  src.connect(actx.destination);
  src.start(0);
}
<textarea id=txt></textarea>
(Note: there seems to be a bug in Firefox at the time of this writing that reports a column which does not exist in the code at the send() call; if that is a problem, try the code in Chrome.)
JavaScript is an asynchronous, event-driven language, so you can't make a synchronous function.

What is the best audio file format to play audio from javascript in custom chromecast receiver

I have developed a simple custom chromecast receiver for a game.
In it I play short sounds from the receiver javascript using:
this.bounceSound = new Audio("paddle.ogg");
to create the audio object when the game is loaded, and then using:
this.bounceSound.play();
to play the sound when needed in the game.
This works fine in Chrome on my laptop, but when running the receiver on my Chromecast some sounds don't play and others are delayed.
Could this be a problem with my choice of sound format (.ogg) for the audio files?
If not, what else could be the problem?
Are there any best practices on the details of the sound files (frequency, bit depth, etc.)?
Thanks
Just for the record to avoid future confusion of other developers trying to load and play back multiple short sounds at the same time:
On Chromecast, the HTML video and audio tags can only support a single active media element at a time.
(Source: https://plus.google.com/+LeonNicholls/posts/3Fq5jcbxipJ - make sure to read the rest; it also contains important information about limitations.)
Only one audio element will be loaded; the others get error code 4 (at least that was the case during my debugging sessions). The correct way of loading and playing back several short sounds is to use the Web Audio API, as explained by Leon Nicholls in the Google+ post linked above.
Simple Web Audio API Wrapper
I whipped up a crude replacement for the HTMLAudioElement in JavaScript that is based on the Web Audio API:
function WebAudio(src) {
  if (src) this.load(src);
}

WebAudio.prototype.audioContext = new AudioContext();

WebAudio.prototype.load = function(src) {
  if (src) this.src = src;
  console.log('Loading audio ' + this.src);
  var self = this;
  var request = new XMLHttpRequest();
  request.open("GET", this.src, true);
  request.responseType = "arraybuffer";
  request.onload = function() {
    self.audioContext.decodeAudioData(request.response, function(buffer) {
      if (!buffer) {
        if (self.onerror) self.onerror();
        return;
      }
      self.buffer = buffer;
      if (self.onload) self.onload(self);
    }, function(error) {
      if (self.onerror) self.onerror(error);
    });
  };
  request.send();
};

WebAudio.prototype.play = function() {
  var source = this.audioContext.createBufferSource();
  source.buffer = this.buffer;
  source.connect(this.audioContext.destination);
  source.start(0);
};
It can be used as follows:
var audio1 = new WebAudio('sounds/sound1.ogg');
audio1.onload = function() {
  audio1.play();
};
var audio2 = new WebAudio('sounds/sound2.ogg');
audio2.onload = function() {
  audio2.play();
};
You should download the sounds up front, before you start the game. Also be aware that these sounds will then be stored in memory, and the Chromecast has very limited memory for that. Make sure the sounds are small and will all fit into memory.
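For instance, a small preloading sketch built on the WebAudio wrapper above (the file names and startGame() entry point are placeholders):
// Sketch: preload all game sounds and start only when every buffer has decoded.
var soundFiles = ['sounds/bounce.ogg', 'sounds/score.ogg']; // placeholder list
var sounds = {};
var remaining = soundFiles.length;
soundFiles.forEach(function(src) {
  var s = new WebAudio(src);
  s.onload = function() {
    sounds[src] = s;
    if (--remaining === 0) startGame(); // startGame() is your game entry point (placeholder)
  };
});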
