WebAudioApi StreamSource - javascript

I'd like to use the Web Audio API with streams. Pre-listening is very important and can't be realized when I have to wait for each audio file to finish downloading. Downloading the entire audio data is not intended, but at the moment it's the only way I can get it to work:
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which is required by the audioCtx anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.loop = true;
    source.start(0); // AudioBufferSourceNode has start(), not play()
  },
  function(e) { console.log("Error with decoding audio data" + e.err); });
};
request.send();
I found a possibility to use a stream when requesting it from navigator.mediaDevices:

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then(function(stream) {
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    source.connect(audioCtx.destination); // a MediaStreamAudioSourceNode has no play() method
  });
Is it possible to use XHR (or fetch) instead of navigator.mediaDevices to get the stream?

// fetch doesn't support a range-header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
  const reader = response.body.getReader();
  // a ReadableStream is not accepted by createMediaStreamSource
  const stream = new ReadableStream({...});
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
});

It doesn't work, because createMediaStreamSource expects a MediaStream, not a ReadableStream.
My first step is to recreate the functionality of the HTML audio element, including seeking. Is there any way to get an XHR stream and feed it into an AudioContext?
The final goal is to create a single-track audio editor with fades, cutting, pre-listening, mixing and export functionality.
EDIT:
Another attempt was to use an HTML audio element and create a source node from it:

var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source node doesn't expose a start() method here;
// the media element is not handled by the context's internal scheduler
source.mediaElement.play();

The audio element supports streaming, but cannot be handled by the context's scheduler, which is important for an audio editor with pre-listening functionality. It would be great to point the standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.

I experienced this problem before and have been working on the demo solution below, which streams audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because decodeAudioData() must be bypassed, custom decoders that allow chunk-based decoding are required:
https://github.com/AnthumChris/fetch-stream-audio
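The core scheduling idea behind chunked streaming can be sketched as follows. This is a simplified, hypothetical outline, not the linked repo's actual code: decodeChunk stands in for a custom chunk-capable decoder (decodeAudioData cannot reliably decode partial files), and each decoded chunk becomes its own AudioBufferSourceNode, started where the previous chunk ends.

```javascript
// Pure helper: where should the next chunk start so playback is gapless?
// If decoding fell behind (underrun), restart at the current time.
function nextStartTime(currentTime, scheduledUntil) {
  return Math.max(currentTime, scheduledUntil);
}

// Browser-only sketch; url and decodeChunk are placeholders.
async function streamAudio(url, audioCtx, decodeChunk) {
  const response = await fetch(url);
  const reader = response.body.getReader();
  let scheduledUntil = 0;
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const audioBuffer = await decodeChunk(value); // Uint8Array -> AudioBuffer
    const src = audioCtx.createBufferSource();
    src.buffer = audioBuffer;
    src.connect(audioCtx.destination);
    const startAt = nextStartTime(audioCtx.currentTime, scheduledUntil);
    src.start(startAt);
    scheduledUntil = startAt + audioBuffer.duration;
  }
}
```

Seeking could then be derived by aborting the read loop and re-issuing the fetch with a Range request at the target byte offset.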

Related

Does MediaElementSource use less memory than BufferSource in Web Audio API?

I am making a little app that will play audio files (mp3, wav) with the ability to use an equalizer on them (like a regular audio player); for this I am using the Web Audio API.
I managed to get the playback part working in two ways. First, using decodeAudioData() of BaseAudioContext:
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.loop = true;
    },
    function(e) { console.log("Error with decoding audio data" + e.err); });
  };
  request.send();
}

// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0); // note: this races the asynchronous decode above
  play.setAttribute('disabled', 'disabled');
}
and a much easier way with Audio() and createMediaElementSource():
let _AudioContext = new AudioContext();
let _audioContainer = new Audio('assets/mp3/pink_noise.wav');
let _sourceNodes = _AudioContext.createMediaElementSource(_audioContainer);
_sourceNodes.connect(_AudioContext.destination);
_audioContainer.play();
I think the second one uses less memory than createBufferSource(), because createBufferSource() stores the complete audio file in memory. But I am not sure about this; I don't have much experience with tools like the Chrome Dev Tools to read it correctly.
Does createMediaElementSource() use less memory than createBufferSource()?
Edit:
Using Chrome's Task Manager, it seems that when using createBufferSource(), just loading the file increases the Memory column by around 40,000K versus +/-60K with createMediaElementSource(), and the JavaScript Memory by 1,000K vs 20K.
I think you've found the answer in the task manager.
You need to be aware of a couple of things:
With a media element, you lose sample-accurate control; this may not be important to you.
You need appropriate access permissions when using a MediaElementAudioSourceNode; this may not be a problem if all of your assets are on the same server.
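The access-permission point refers to CORS: if the media file comes from another origin that doesn't send CORS headers, the MediaElementAudioSourceNode outputs silence. A minimal sketch (the asset URL is a placeholder):

```javascript
// Pure helper: does an asset URL share the page's origin?
function isSameOrigin(assetUrl, pageOrigin) {
  return new URL(assetUrl, pageOrigin).origin === pageOrigin;
}

// Browser-only sketch: request CORS access before wiring the element
// into the audio graph; without it a cross-origin source plays silence.
function makeElementSource(audioCtx, url) {
  var audio = new Audio();
  audio.crossOrigin = 'anonymous'; // server must send Access-Control-Allow-Origin
  audio.src = url;
  var source = audioCtx.createMediaElementSource(audio);
  source.connect(audioCtx.destination);
  return source;
}
```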

Javascript - Streaming Audio On The Fly (Web Audio API & XHR)

I have a simple XMLHttpRequest running to fetch an audio file; when it's done fetching, it decodes the audio and plays it.
var xhr = new XMLHttpRequest();
xhr.open('GET', /* some url */, true);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
  decode(xhr.response);
}.bind(this);
xhr.send(null);
The problem with this, however, is that the file only decodes after the request has finished downloading. Is there an approach for streaming audio without having to wait for it to finish downloading, without using <audio> tags?
You still need an HTML5 Audio object, but instead of using it directly you can use a MediaElementAudioSourceNode along with the Audio element to take advantage of the Web Audio API.
Excerpt from here
Rather than going the usual path of loading a sound directly by issuing an XMLHttpRequest and then decoding the buffer, you can use the media element audio source node (MediaElementAudioSourceNode) to create nodes that behave much like audio source nodes (AudioSourceNode), but wrap an existing <audio> tag. Once we have this node connected to our audio graph, we can use our knowledge of the Web Audio API to do great things. This small example applies a low-pass filter to the <audio> tag:
Sample Code:
window.addEventListener('load', onLoad, false);

function onLoad() {
  var context = new AudioContext();
  var audio = new Audio();
  var source = context.createMediaElementSource(audio);
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 440;
  source.connect(filter); // was source.connect(this.filter), which is wrong here
  filter.connect(context.destination);
  audio.src = 'http://example.com/the.mp3';
  audio.play();
}

Synchronize audios with HTML5 and Javascript

I want to join two audio files into one and synchronize them with HTML5 on the client side. I've seen that the Web Audio API can do many things, but I have not been able to find out how.
I have links to two audio files (.mp3, .wav, ...); what I want is to synchronize these two audio files, like a voice and a song. I do not want them played one after the other, I want them synchronized.
I would do it all on the client side with HTML5, without needing the server. Is this possible to do?
Thank you so much for your help.
So as I understand it, you have two audio files which you want to render together on the client. The Web Audio API can do this for you quite easily, entirely in JavaScript. A good place to start is http://www.html5rocks.com/en/tutorials/webaudio/intro/
An example script would be
var context = new (window.AudioContext || window.webkitAudioContext)(); // Create an audio context

// Create an XML HTTP Request to collect your audio files
// https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();
var audio_buffer_1, audio_buffer_2;

xhr1.open("GET", "your_url_to_audio_1");
xhr1.responseType = 'arraybuffer';
xhr1.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr1.response, function(buffer) { // was request.response, which is undefined here
    audio_buffer_1 = buffer;
  }, function(error) {});
};
xhr2.open("GET", "your_url_to_audio_2");
xhr2.responseType = 'arraybuffer';
xhr2.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr2.response, function(buffer) {
    audio_buffer_2 = buffer;
  }, function(error) {});
};
xhr1.send();
xhr2.send();
These load the Web Audio API AudioBuffers (https://webaudio.github.io/web-audio-api/#AudioBuffer) of your two files into the global variables audio_buffer_1 and audio_buffer_2.
Now, to create a new audio buffer, you need to use an OfflineAudioContext:
// Assumes both buffers are of the same length. If not, you need to modify the 2nd argument below
var offlineContext = new OfflineAudioContext(context.destination.channelCount, audio_buffer_1.duration * context.sampleRate, context.sampleRate);
var summing = offlineContext.createGain();
summing.connect(offlineContext.destination);

// Build the two buffer source nodes and attach their buffers
var buffer_1 = offlineContext.createBufferSource();
var buffer_2 = offlineContext.createBufferSource();
buffer_1.buffer = audio_buffer_1;
buffer_2.buffer = audio_buffer_2;
buffer_1.connect(summing); // the sources must be connected into the graph
buffer_2.connect(summing);

// Do something with the result by adding a callback
offlineContext.oncomplete = function(event) {
  var renderedBuffer = event.renderedBuffer; // the callback receives an event, not the buffer directly
  // Place code here
};

// Begin the summing
buffer_1.start(0);
buffer_2.start(0);
offlineContext.startRendering();
Once done, the callback receives a new buffer, renderedBuffer, which is the direct summation of the two buffers.
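To actually hear the result, you can wrap the rendered buffer in a regular buffer source on the online context. A minimal sketch (the helper for the frame-count arithmetic is included for clarity):

```javascript
// Pure helper: the frame count an OfflineAudioContext needs for a
// given duration in seconds at a given sample rate (2nd constructor argument).
function framesFor(durationSeconds, sampleRate) {
  return Math.ceil(durationSeconds * sampleRate);
}

// Browser-only sketch: play the mixed result on the normal AudioContext.
function playRendered(context, renderedBuffer) {
  var src = context.createBufferSource();
  src.buffer = renderedBuffer;
  src.connect(context.destination);
  src.start(0);
  return src;
}
```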

How can I play audio elements in sync with key presses?

HTML
<textarea id="words" name="words"></textarea>
<audio id="type" src="type.mp3"></audio>
JS
document.getElementById('words').onkeydown = function() {
  document.getElementById('type').play();
};
I want type.mp3 to play any time I press a key, but it is not played in sync with the key presses.
I am looking for a pure JS solution.
The audio media element depends on the buffering mechanism of the browser and may not play instantly when play() is called.
To play sounds in sync with key presses you have to use the Web Audio API instead, which lets you play an in-memory buffer and therefore start instantly.
Here is an example of how you can load and trigger the sound:
window.AudioContext = window.AudioContext || window.webkitAudioContext;

var request = new XMLHttpRequest(),
    url = "https://dl.dropboxusercontent.com/s/8fp1hnkwp215gfs/chirp.wav",
    actx = new AudioContext(),
    abuffer;

// load file via XHR
request.open("GET", url, true);
request.responseType = "arraybuffer";
request.onload = function() {
  // Asynchronously decode the audio file data in request.response
  actx.decodeAudioData(request.response,
    function(buffer) {
      if (buffer) {
        abuffer = buffer; // keep a reference to the decoded buffer
        setup();          // set up the handler
      }
    }
  );
};
request.send();

// set up the key handler
function setup() {
  document.getElementById("txt").onkeydown = play;
}

// play sample - a new buffer source must be created each time
function play() {
  var src = actx.createBufferSource();
  src.buffer = abuffer;
  src.connect(actx.destination);
  src.start(0);
}

<textarea id=txt></textarea>
(Note: there seems to be a bug in Firefox at the time of this writing, reporting a column which does not exist in the code at the send() call - if it's a problem, try the code in Chrome.)
JavaScript is an asynchronous, event-driven language, so you can't make a synchronous function.

WebAudioApi, frequency sound animation, android chrome v37

I'm having trouble with the frequency animation of sounds through the Web Audio API on Android Chrome v37.
I can hear the music, but the animation doesn't show.
A lot of experimenting has finally led me to two separate ways of loading and animating sounds.
In the first way, I load the sound via an HTML5 audio element. Then I create a MediaElementSource with the audio element as parameter, connect the MediaElementSource to an Analyser (created with AudioContext.createAnalyser), connect the analyser to a GainNode, and finally connect the GainNode to AudioContext.destination.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var audio = new Audio(path_to_file);
var source = acontext.createMediaElementSource(audio); // was temp_audio, which is undefined
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
This schema works on desktop Chrome, the newest mobile Safari, and also in Firefox.
The second way I found has a few differences: the sound is read into a buffer, which is then connected to the analyser.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var source = acontext.createBufferSource();
var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.addEventListener('load', function() {
  source.buffer = acontext.createBuffer(request.response, false);
}, false);
request.send();
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
For drawing the animation I use a canvas; the data for the drawing comes from:

analyser.fftSize = 1024;
analyser.smoothingTimeConstant = 0.92;
var dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(dataArray); // fill dataArray from the analyser
for (var i = 0; i < analyser.frequencyBinCount; i++) {
  barHeight = dataArray[i];
  // and other logic here.
}
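For completeness, a draw loop of this kind usually re-samples the analyser on every animation frame; a sketch (the canvas element and bar sizing are assumptions, not part of the question's code):

```javascript
// Pure helper: map a byte value (0-255) from getByteFrequencyData
// to a bar height in pixels for a canvas of the given height.
function barHeightPx(value, canvasHeight) {
  return Math.round((value / 255) * canvasHeight);
}

// Browser-only sketch of the animation loop.
function startDrawing(analyser, canvas) {
  var ctx = canvas.getContext('2d');
  var dataArray = new Uint8Array(analyser.frequencyBinCount);
  (function draw() {
    requestAnimationFrame(draw); // re-run on every frame
    analyser.getByteFrequencyData(dataArray);
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / dataArray.length;
    for (var i = 0; i < dataArray.length; i++) {
      var h = barHeightPx(dataArray[i], canvas.height);
      ctx.fillRect(i * barWidth, canvas.height - h, barWidth, h);
    }
  })();
}
```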
The second way works on older Chrome versions, mobile browsers, and Safari.
But on Android Chrome v37 neither way works. As I said before, the first way doesn't show the animation, and the second one just breaks with an error: acontext.createBuffer() requires 3 parameters instead of 2.
As I understand it, in the newer Web Audio API version this method was rewritten with different parameters, so I don't use it.
Any advice on how to make this work on Android Chrome v37?
I found a cross-browser solution.

acontext.decodeAudioData(request.response, function(buffer) {
  source.buffer = buffer;
});

This way works correctly in all browsers. But I dropped the audio tags and load the sounds over XMLHttpRequest. If you know a way to get the buffer from an audio element to decode it with decodeAudioData, please comment on how.
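One possible workaround (a sketch, not verified on Android Chrome v37): the element's internal buffer is not exposed, but if the element has an ordinary src URL you can simply re-download it and decode that.

```javascript
// Pure helper: a src can only be re-fetched if it is an ordinary URL,
// not a blob: or mediastream: URL created at runtime.
function canRefetchSrc(src) {
  return typeof src === 'string' && src.length > 0 &&
         !/^(blob|mediastream):/.test(src);
}

// Browser-only sketch: re-download the audio element's source and
// decode it into an AudioBuffer for the analyser graph.
function bufferFromMediaElement(acontext, audio, onBuffer) {
  if (!canRefetchSrc(audio.src)) return;
  var request = new XMLHttpRequest();
  request.open('GET', audio.src, true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    acontext.decodeAudioData(request.response, onBuffer);
  };
  request.send();
}
```

Note that this downloads the file a second time; it does not reuse the bytes the audio element already buffered.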
