Synchronize audio with HTML5 and JavaScript

I want to mix two audio tracks into one, synchronized, with HTML5 on the client side. I've seen that the Web Audio API can do many things, but I have not been able to find out how.
I have links to two audio files (.mp3, .wav, ...). What I want is to synchronize these two files, like a voice and a song: not to play them one after the other, but at the same time.
I would like to do it all on the client side using HTML5, without needing the server. Is this possible?
Thank you so much for your help.

As I understand it, you have two audio files which you want to render together on the client. The Web Audio API can do this for you quite easily, entirely in JavaScript. A good place to start is http://www.html5rocks.com/en/tutorials/webaudio/intro/
An example script would be:
var context = new (window.AudioContext || window.webkitAudioContext)(); // Create an audio context

// Create an XMLHttpRequest for each of your audio files
// https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest
var xhr1 = new XMLHttpRequest();
var xhr2 = new XMLHttpRequest();
var audio_buffer_1, audio_buffer_2;

xhr1.open("GET", "your_url_to_audio_1");
xhr1.responseType = 'arraybuffer';
xhr1.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr1.response, function(buffer) {
    audio_buffer_1 = buffer;
  }, function(error) {
    console.error(error);
  });
};
xhr2.open("GET", "your_url_to_audio_2");
xhr2.responseType = 'arraybuffer';
xhr2.onload = function() {
  // Decode the audio data
  context.decodeAudioData(xhr2.response, function(buffer) {
    audio_buffer_2 = buffer;
  }, function(error) {
    console.error(error);
  });
};
xhr1.send();
xhr2.send();
These requests load the decoded Web Audio API AudioBuffer objects (https://webaudio.github.io/web-audio-api/#AudioBuffer) for your two files into the global variables audio_buffer_1 and audio_buffer_2.
Now, to mix them into a new buffer, you need an offline audio context:
// Assumes both buffers are of the same length. If not, pass the longer duration in the 2nd argument below
var offlineContext = new OfflineAudioContext(context.destination.channelCount, audio_buffer_1.duration * context.sampleRate, context.sampleRate);
var summing = offlineContext.createGain();
summing.connect(offlineContext.destination);
// Build the two buffer source nodes and attach their buffers
var buffer_1 = offlineContext.createBufferSource();
var buffer_2 = offlineContext.createBufferSource();
buffer_1.buffer = audio_buffer_1;
buffer_2.buffer = audio_buffer_2;
// Connect both sources to the summing gain node
buffer_1.connect(summing);
buffer_2.connect(summing);
// Do something with the result by adding a callback
offlineContext.oncomplete = function(event) {
  var renderedBuffer = event.renderedBuffer;
  // Place code here
};
// Begin the summing
buffer_1.start(0);
buffer_2.start(0);
offlineContext.startRendering();
Once rendering is done, the oncomplete callback receives an event whose renderedBuffer property is a new AudioBuffer containing the direct summation of the two inputs, played in sync.
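For completeness, a minimal sketch of what you might do with the result. `renderFrameCount` mirrors the length argument passed to the OfflineAudioContext above, and `playRendered` is a hypothetical helper (not part of any API) that plays the mixed buffer on the live context:

```javascript
// Frame count for the OfflineAudioContext length argument:
// duration in seconds times the sample rate, rounded up.
function renderFrameCount(durationSeconds, sampleRate) {
  return Math.ceil(durationSeconds * sampleRate);
}

// Play the mixed result on the live AudioContext (browser only).
// Per the spec, the complete event's renderedBuffer holds the mix.
function playRendered(context, renderedBuffer) {
  var playback = context.createBufferSource();
  playback.buffer = renderedBuffer;
  playback.connect(context.destination);
  playback.start(0);
}
```

A rendered AudioBuffer can be reused freely; each playback just needs a fresh AudioBufferSourceNode, since source nodes are single-use.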

Related

WebAudioApi StreamSource

I'd like to use the Web Audio API with streams. Pre-listening is very important and can't be realized when I have to wait for each audio file to download completely.
Downloading the entire audio data is not what I intend, but it is the only way I can get it to work at the moment:
var request = new XMLHttpRequest();
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
  var audioData = request.response;
  // audioData is the entire downloaded audio file, which is what the audioCtx requires anyway
  audioCtx.decodeAudioData(audioData, function(buffer) {
    var source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    source.loop = true;
    source.start();
  },
  function(e) { console.error("Error with decoding audio data", e); });
};
request.send();
I found a way to use a stream when requesting it from navigator.mediaDevices:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
.then(function(stream) {
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(audioCtx.destination);
});
Is it possible to use XHR (or fetch) instead of navigator.mediaDevices to get the stream?
// fetch doesn't support a range-header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
  const reader = response.body.getReader();
  // a ReadableStream is not accepted by createMediaStreamSource
  const stream = new ReadableStream({...});
  var audioCtx = new AudioContext();
  var source = audioCtx.createMediaStreamSource(stream);
  source.connect(audioCtx.destination);
});
It doesn't work, because createMediaStreamSource() expects a MediaStream, not a ReadableStream.
My first step is to reproduce the functionality of the HTML audio element, including seeking. Is there any way to get an XHR stream and feed it into an AudioContext?
The final goal is to create a single-track audio editor with fades, cutting, pre-listening, mixing, and export functionality.
EDIT:
Another attempt was to use an HTML audio element and create a source node from it:
var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source doesn't have a start() method now
// the mediaElement reference is not handled by the internal context scheduler
source.mediaElement.play();
The audio element supports streaming, but it cannot be handled by the context's scheduler, which matters for an audio editor with pre-listening functionality.
It would be great to point a standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.
I ran into this problem before and have been working on the demo below, which streams audio in chunks with the Streams API. Seeking is not currently implemented, but it could be added. Because decodeAudioData() currently has to be bypassed, custom decoders that support chunk-based decoding must be provided:
https://github.com/AnthumChris/fetch-stream-audio
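The core idea in that demo can be sketched as a pull loop over the response body. This is only a rough outline: `onChunk` stands in for a custom chunk-capable decoder, which is an assumption here, since decodeAudioData() cannot consume partial files and no such decoder ships with the Web Audio API:

```javascript
// Pull encoded audio bytes from a fetch response chunk by chunk.
// Each chunk is a Uint8Array handed to a caller-supplied decoder.
function streamChunks(url, onChunk) {
  return fetch(url).then(function (response) {
    var reader = response.body.getReader();
    function pump() {
      return reader.read().then(function (result) {
        if (result.done) return;       // stream exhausted
        onChunk(result.value);         // encoded audio bytes
        return pump();                 // keep pulling
      });
    }
    return pump();
  });
}
```

Decoded PCM from each chunk could then be scheduled onto consecutive AudioBufferSourceNodes, which is essentially what the linked repository does.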

Does MediaElementSource use less memory than BufferSource in the Web Audio API?

I am making a little app that will play audio files (mp3, wav) with the ability to apply an equalizer to them (say, a regular audio player); for this I am using the Web Audio API.
I managed to get the playback part working in two ways. Using decodeAudioData() of BaseAudioContext:
function getData() {
  source = audioCtx.createBufferSource();
  var request = new XMLHttpRequest();
  request.open('GET', 'viper.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = function() {
    var audioData = request.response;
    audioCtx.decodeAudioData(audioData, function(buffer) {
      source.buffer = buffer;
      source.connect(audioCtx.destination);
      source.loop = true;
    },
    function(e) { console.log("Error with decoding audio data" + e.err); });
  };
  request.send();
}
// wire up buttons to stop and play audio
play.onclick = function() {
  getData();
  source.start(0);
  play.setAttribute('disabled', 'disabled');
};
and a much easier way with Audio() and createMediaElementSource():
let _audioContext = new AudioContext();
let _audioContainer = new Audio('assets/mp3/pink_noise.wav');
let _sourceNode = _audioContext.createMediaElementSource(_audioContainer);
_sourceNode.connect(_audioContext.destination);
_audioContainer.play();
I think the second one uses less memory than createBufferSource(), because createBufferSource() keeps the complete decoded audio file in memory. But I am not sure about this; I do not have much experience with tools like the Chrome dev tools to read the measurements correctly.
Does createMediaElementSource() use less memory than createBufferSource()?
Edit:
Using Chrome's task manager, it seems that with createBufferSource() just loading the file increases the Memory column by something around 40000k, against +/-60k with createMediaElementSource(), and the JavaScript memory by 1000k vs 20k.
I think you've found the answer in the task manager.
You need to be aware of a couple of things:
With a media element, you lose sample-accurate control; this may not be important to you.
You need appropriate access permissions (CORS) when using a MediaElementAudioSourceNode; this may not be a problem if all of your assets are on the same server.
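The CORS point can be illustrated with a small sketch. This is an assumption-laden outline, not a full player: the URL is a placeholder, and the server must send an Access-Control-Allow-Origin header or the source node will output silence:

```javascript
// Hook a cross-origin media element into the Web Audio graph.
// Without crossOrigin access, a MediaElementAudioSourceNode
// connected to a cross-origin element produces zeroed samples.
function playCrossOrigin(url) {
  var ctx = new (window.AudioContext || window.webkitAudioContext)();
  var audio = new Audio();
  audio.crossOrigin = 'anonymous'; // request CORS access to the media
  audio.src = url;                 // placeholder URL
  var source = ctx.createMediaElementSource(audio);
  source.connect(ctx.destination);
  audio.play();
  return source;
}
```

Same-origin assets need none of this, which is why the caveat only bites when audio is served from a CDN or another host.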

Use XMLHttpRequest to load multiple audio files and append them to play in Web Audio API

I have a web page that loads 3 different audio files (each 1 second long, though) in a fixed order and then merges them into one single audio buffer (one after another).
To demonstrate what I want to do, this is the sample code snippet:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContextApp = new AudioContext();
var buffer1 = audioContextApp.createBufferSource();
var buffer2 = audioContextApp.createBufferSource();
var buffer3 = audioContextApp.createBufferSource();

var request1 = new XMLHttpRequest();
request1.open('GET', URL_FIRST_SOUND, true);
request1.responseType = 'arraybuffer';
request1.onload = function() {
  var undecodedAudio = request1.response;
  audioContextApp.decodeAudioData(undecodedAudio, function(buffer) {
    buffer1.buffer = buffer;
  });
};
request1.send();
// Do the same thing with request2, request3 to load the second and third sounds.
// DO the same thing with request2, request3 to load second and third sound.
Now I don't know how to properly append those 3 audio buffers into one and let the user play the merged audio.
Well, I figured out a solution myself: when I connect each buffer source to audioContextApp.destination, I can simply specify when the second sound should start, namely the current time plus the duration of the first AudioBuffer.
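That scheduling idea can be sketched as follows. The offset arithmetic is a pure helper; `playInSequence` assumes an AudioContext and decoded AudioBuffers like those in the question:

```javascript
// Pure helper: each clip starts after the summed duration
// of all the clips before it.
function startOffsets(durations) {
  var offsets = [];
  var t = 0;
  for (var i = 0; i < durations.length; i++) {
    offsets.push(t);
    t += durations[i];
  }
  return offsets;
}

// Browser part: schedule every buffer on the shared clock so
// playback is gapless regardless of when decoding finished.
function playInSequence(ctx, buffers) {
  var offsets = startOffsets(buffers.map(function (b) { return b.duration; }));
  var t0 = ctx.currentTime;
  buffers.forEach(function (buf, i) {
    var node = ctx.createBufferSource();
    node.buffer = buf;
    node.connect(ctx.destination);
    node.start(t0 + offsets[i]);
  });
}
```

Scheduling against ctx.currentTime is more reliable than chaining onended callbacks, since the audio clock is sample-accurate while JavaScript timers are not.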

Javascript - Streaming Audio On The Fly (Web Audio API & XHR)

I have a simple XMLHttpRequest that fetches an audio file; when the fetch completes, it decodes the audio and plays it.
var xhr = new XMLHttpRequest();
xhr.open('GET', /some url/, true);
xhr.responseType = 'arraybuffer';
xhr.onload = function() {
decode(xhr.response);
}.bind(this);
xhr.send(null);
The problem with this, however, is that the file is decoded only after the request has finished downloading. Is there an approach for streaming audio without having to wait for it to finish downloading, and without using <audio> tags?
You still need an HTML5 Audio object, but instead of using it directly you can wrap it in a MediaElementAudioSourceNode to take advantage of the Web Audio API.
Excerpt from here:
Rather than going the usual path of loading a sound directly by
issuing an XMLHttpRequest and then decoding the buffer, you can use
the media stream audio source node (MediaElementAudioSourceNode) to
create nodes that behave much like audio source nodes
(AudioSourceNode), but wrap an existing <audio> tag. Once we have this
node connected to our audio graph, we can use our knowledge of the Web
Audio API to do great things. This small example applies a low-pass
filter to the <audio> tag:
Sample Code:
window.addEventListener('load', onLoad, false);
function onLoad() {
  var context = new (window.AudioContext || window.webkitAudioContext)();
  var audio = new Audio();
  var source = context.createMediaElementSource(audio);
  var filter = context.createBiquadFilter();
  filter.type = 'lowpass';
  filter.frequency.value = 440;
  source.connect(filter);
  filter.connect(context.destination);
  audio.src = 'http://example.com/the.mp3';
  audio.play();
}

How to shift/modulate audio buffer frequency using Web Audio API

I'm experimenting with the Web Audio API, and my goal is to create a digital guitar where each string has an initial sound source of an actual guitar playing the string open; I would then like to generate all the other fret-position sounds dynamically. After some research into the subject (this is all pretty new to me), it sounded like this might be achieved by altering the frequency of the source sound sample.
The problem is I've seen lots of algorithms for altering synthesized sine waves, but nothing for altering the frequency of an audio sample. Here is a sample of my code to give a better idea of how I'm trying to implement this:
// Guitar chord buffer
var chordBuffer = null;
// Create audio context
var context = new webkitAudioContext();
// Load sound sample
var request = new XMLHttpRequest();
request.open('GET', 'chord.mp3', true);
request.responseType = 'arraybuffer';
request.onload = loadChord;
request.send();
// Handle guitar string "pluck"
$('.string').mouseenter(function(e) {
  e.preventDefault();
  var source = context.createBufferSource();
  source.buffer = chordBuffer;
  // Create a JavaScriptNode so we can get at the raw audio buffer
  var jsnode = context.createJavaScriptNode(1024, 1, 1);
  jsnode.onaudioprocess = changeFrequency;
  // Connect nodes and play
  source.connect(jsnode);
  jsnode.connect(context.destination);
  source.noteOn(0);
});
function loadChord() {
  context.decodeAudioData(
    request.response,
    function(pBuffer) { chordBuffer = pBuffer; },
    function(pError) { console.error(pError); }
  );
}
function changeFrequency(e) {
  var ib = e.inputBuffer.getChannelData(0);
  var ob = e.outputBuffer.getChannelData(0);
  var n = ib.length;
  for (var i = 0; i < n; ++i) {
    // Code needed...
  }
}
So there you have it: I can play the sound just fine, but I'm at a bit of a loss when it comes to writing the code in the changeFrequency function that would change the chord sample's frequency so it sounded like another fret position on the string. Any help with this code would be appreciated, as would opinions on whether what I'm attempting is even possible.
Thanks!
playbackRate will change the pitch of the sound, but also its playback time.
If you want to change only the pitch, you could use a pitch shifter. Check my JavaScript pitch-shifter implementation here and its use with a JavaScriptNode in this plugin.
You can get the desired behavior by setting playbackRate, but as Brad says, you're going to have to use multi-sampling. Also see this SO question: Setting playbackRate on audio element connected to web audio api.
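For the simple playbackRate route, the fret-to-rate mapping follows from equal temperament: one semitone is a factor of 2^(1/12). A minimal sketch, assuming the `context` and decoded `chordBuffer` from the question (the `playFret` helper is hypothetical):

```javascript
// Pure helper: playbackRate for an n-semitone shift
// (equal temperament: each semitone multiplies frequency by 2^(1/12)).
function semitoneToRate(n) {
  return Math.pow(2, n / 12);
}

// Browser sketch: play the open-string sample shifted up by `fret`
// semitones. Note this also shortens playback time proportionally.
function playFret(context, chordBuffer, fret) {
  var source = context.createBufferSource();
  source.buffer = chordBuffer;
  source.playbackRate.value = semitoneToRate(fret);
  source.connect(context.destination);
  source.start(0);
}
```

At fret 12 the rate is exactly 2.0 (one octave up, half the duration), which is why multi-sampling along the neck is usually combined with smaller rate shifts.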
