Web Audio API, frequency sound animation, Android Chrome v37 - javascript

I'm having trouble with the frequency animation of sounds through the Web Audio API on Android Chrome v37.
I can hear the music, but the animation doesn't show up.
A lot of experimenting eventually led me to two separate ways of loading sounds and animating them.
In the first way I load the sound via an HTML5 audio element, then create a MediaElementSource with the audio element as its parameter.
I connect the MediaElementSource to an analyser (created with AudioContext.createAnalyser()),
connect the analyser to a GainNode, and finally connect the GainNode to AudioContext.destination.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var audio = new Audio(path_to_file);
var source = acontext.createMediaElementSource(audio);
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
This scheme works on desktop Chrome, the newest mobile Safari, and also Firefox.
The second way I found has a few differences: the sound is read into a buffer, which is then connected to the analyser.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var source = acontext.createBufferSource();
var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.addEventListener('load', function() {
    source.buffer = acontext.createBuffer(request.response, false);
}, false);
request.send();
source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
To draw the animation I use a canvas; the data for drawing comes from:
analyser.fftSize = 1024;
analyser.smoothingTimeConstant = 0.92;
var dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(dataArray); //fill dataArray from analyser
for (var i = 0; i < analyser.frequencyBinCount; i++) {
    barHeight = dataArray[i];
    // and other logic here.
}
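A minimal sketch of a draw loop built on this data might look like the following (the canvas id and the bar drawing are illustrative, not taken from the question):
var canvas = document.getElementById('vis');        // assumed canvas element
var canvasCtx = canvas.getContext('2d');
var dataArray = new Uint8Array(analyser.frequencyBinCount);

function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(dataArray);        // refresh spectrum data each frame
    canvasCtx.clearRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / analyser.frequencyBinCount;
    for (var i = 0; i < analyser.frequencyBinCount; i++) {
        var barHeight = dataArray[i];                // byte value 0..255
        canvasCtx.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
    }
}
draw();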
The second way works on older Chrome versions, on mobile browsers, and on Safari.
But in Android Chrome v37 neither way works. As I said before, the first way doesn't show the animation, and the second one just breaks with an error: acontext.createBuffer() requires 3 parameters instead of 2.
As I understand it, in newer Web Audio API versions this method was rewritten with a different signature and different parameters, so I can't use it this way.
Any advice on how to make this work in Android Chrome v37?

I found a cross-browser solution.
acontext.decodeAudioData(request.response, function (buffer) {
    source.buffer = buffer;
});
This way works correctly in all browsers. But it means giving up the audio tags and loading the sounds over XMLHttpRequest instead. If you know a way to get the buffer from an audio element so it can be decoded with decodeAudioData, please comment on how.
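For reference, a minimal sketch of the whole cross-browser chain with decodeAudioData (the node wiring and the start(0) call are assumptions about the surrounding setup, not code from the question):
var acontext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();

var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.onload = function () {
    acontext.decodeAudioData(request.response, function (buffer) {
        var source = acontext.createBufferSource();
        source.buffer = buffer;
        source.connect(analyser);
        analyser.connect(gainNode);
        gainNode.connect(acontext.destination);
        source.start(0);                      // noteOn(0) on very old implementations
    }, function (e) {
        console.error('decodeAudioData failed', e);
    });
};
request.send();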

Related

Adding panner / spatial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element, and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();
    audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);
    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.distanceModel = 'inverse';
    panner.refDistance = 1;
    panner.maxDistance = 10000;
    panner.rolloffFactor = 1;
    panner.coneInnerAngle = 360;
    panner.coneOuterAngle = 0;
    panner.coneOuterGain = 0;
    panner.setPosition(10000, 0, 0); // this doesn't do anything
    peerInput.connect(panner);
    panner.connect(audioCtx.destination);
    // attach the stream to the document element
    var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
    remote_media.attr("autoplay", "autoplay");
    if (MUTE_AUDIO_BY_DEFAULT) {
        remote_media.attr("muted", "false");
    }
    remote_media.attr("controls", "");
    peer_media_elements[peer_id] = remote_media;
    $('body').append(remote_media);
    attachMediaStream(remote_media[0], event.stream);
}
Try to get the event stream before setting up the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
Reference: Mozilla Developer Network - AudioContext
createPanner reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
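In context, the routing could look roughly like this inside onaddstream (a sketch only; the rest of the panner setup stays as in the question):
// Create a source node from the incoming WebRTC stream and route it through the panner
var source = audioCtx.createMediaStreamSource(event.stream);
source.connect(panner);
panner.connect(audioCtx.destination);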
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
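A minimal isolated test could look something like this (the frequency and positions are just illustrative values):
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
osc.frequency.value = 440;

var panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.setPosition(10, 0, 0);   // start far to the right

osc.connect(panner);
panner.connect(ctx.destination);
osc.start();

// After two seconds jump to the far left; on stereo output the change should be obvious
setTimeout(function () { panner.setPosition(-10, 0, 0); }, 2000);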
Figured this out for myself.
The problem was not the code; it was because I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and audio output downgrades to mono.
If you have mono audio, you definitely cannot do 3D positioned sound, hence me thinking the code was not working.

Web Audio API StreamSource

I'd like to use the Web Audio API with streams. Pre-listening is very important and can't be realized when I have to wait for each audio file to finish downloading.
Downloading the entire audio data is not what I want, but it's the only way I can get it to work at the moment:
request.open('GET', src, true);
request.responseType = 'arraybuffer';
request.onload = function() {
    var audioData = request.response;
    // audioData is the entire downloaded audio file, which decodeAudioData requires anyway
    audioCtx.decodeAudioData(audioData, function(buffer) {
        var source = audioCtx.createBufferSource();
        source.buffer = buffer;
        source.connect(audioCtx.destination);
        source.loop = true;
        source.start(0);
    },
    function(e) { console.error('Error with decoding audio data', e.err); });
};
request.send();
I found a way to use a stream when requesting it from navigator.mediaDevices:
navigator.mediaDevices.getUserMedia({ audio: true, video: true })
    .then(function(stream) {
        var audioCtx = new AudioContext();
        var source = audioCtx.createMediaStreamSource(stream);
        source.connect(audioCtx.destination); // a stream source is wired up with connect(); it has no play()/start()
    });
Is it possible to use XHR instead of navigator.mediaDevices to get the stream?
// fetch doesn't support a range-header, which would make seeking impossible with a stream (I guess)
fetch(src).then(response => {
    const reader = response.body.getReader();
    // ReadableStream is not working with createMediaStreamSource
    const stream = new ReadableStream({...});
    var audioCtx = new AudioContext();
    var source = audioCtx.createMediaStreamSource(stream);
    source.play();
});
It doesn't work, because a ReadableStream does not work with createMediaStreamSource.
My first step is to reproduce the functionality of the HTML audio element, including seeking. Is there any way to get an XHR stream and feed it into an AudioContext?
The final idea is to create a single-track audio editor with fades, cutting, pre-listening, mixing, and export functionality.
EDIT:
Another attempt was to use the HTML audio element and create a source node from it:
var audio = new Audio();
audio.src = src;
var source = audioCtx.createMediaElementSource(audio);
source.connect(audioCtx.destination);
// the source doesn't have the start() method now
// the mediaElement reference is not handled by the internal context scheduler
source.mediaElement.play();
The audio element supports streaming, but it cannot be handled by the context scheduler, which matters for building an audio editor with pre-listening functionality.
It would be great to point the standard source node's buffer at the audio element's buffer, but I couldn't find out how to connect them.
I experienced this problem before and have been working on a demo solution below to stream audio in chunks with the Streams API. Seeking is not currently implemented, but it could be derived. Because bypassing decodeAudioData() is currently required, custom decoders must be provided that allow for chunk-based decoding:
https://github.com/AnthumChris/fetch-stream-audio
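The core of that approach is a read loop over the fetch body, handing each chunk to a decoder instead of decodeAudioData(). A rough sketch, where decodeChunk() stands in for whatever custom decoder is supplied (it is not a built-in API):
fetch(src).then(function (response) {
    var reader = response.body.getReader();
    return reader.read().then(function process(result) {
        if (result.done) return;
        decodeChunk(result.value);              // Uint8Array of encoded audio; decoder is user-supplied
        return reader.read().then(process);
    });
});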

Use XMLHttpRequest to load multiple audio files and append them to play in Web Audio API

I have a web page that loads 3 different audio files (each only 1 second long, though) in a fixed order and then merges them into one single audio buffer (one after another).
To demonstrate what I want to do, this is a sample code snippet:
var AudioContext = window.AudioContext || window.webkitAudioContext;
var audioContextApp = new AudioContext();
var buffer1 = audioContextApp.createBufferSource();
var buffer2 = audioContextApp.createBufferSource();
var buffer3 = audioContextApp.createBufferSource();
var request1 = new XMLHttpRequest();
request1.open('GET', URL_FIRST_SOUND, true);
request1.responseType = 'arraybuffer';
request1.onload = function() {
    var undecodedAudio = request1.response;
    audioContextApp.decodeAudioData(undecodedAudio, function(buffer) {
        buffer1.buffer = buffer;
    });
};
request1.send();
// Do the same thing with request2 and request3 to load the second and third sounds.
Now I don't know how to properly append those 3 audio buffers into one and let the user play the merged audio.
Well, I figured out a solution myself. When I connect each buffer source to audioContext.destination, I can simply specify the time at which the second audio should play: the current time plus the duration of the first AudioBuffer.
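In code, that scheduling could look roughly like this, assuming decodedBuffer1..3 are the AudioBuffers returned by decodeAudioData (the names are illustrative):
function playSequentially(ctx, buffers) {
    var when = ctx.currentTime;
    buffers.forEach(function (buffer) {
        var node = ctx.createBufferSource();
        node.buffer = buffer;
        node.connect(ctx.destination);
        node.start(when);          // schedule on the context clock
        when += buffer.duration;   // the next clip starts when this one ends
    });
}

playSequentially(audioContextApp, [decodedBuffer1, decodedBuffer2, decodedBuffer3]);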

web audio in firefox

I am trying to build a web app that visualises and controls the source audio. It works brilliantly in Chrome, but completely breaks in Firefox; it won't even play the audio. Here is the code:
var audio = new Audio();
audio.src='track.mp3';
audio.controls = true;
audio.loop = false;
audio.autoplay = false;
window.addEventListener("load", initPlayer, false);
function initPlayer(){
    $("#player").append(audio);
    context = new AudioContext();
    analyser = context.createAnalyser();
    canvas = document.getElementById("vis");
    ctx = canvas.getContext("2d");
    source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
}
The line that breaks everything is:
source = context.createMediaElementSource(audio);
After adding this line, the player just hangs at 0:00 in Firefox. I have done my research and have come across CORS, but as far as I can understand this should be irrelevant, as the file is kept on the same server.
Please help.
You have to serve the audio correctly from a server so that the MIME types are set, so run it from localhost rather than file:///..../track.mp3.
We used to have a bug in Firefox where MediaElementSourceNode did not work properly in some case. It's now fixed (I believe the fix is in Aurora and Nightly, at the time of writing).
Sorry about that.

How to shift/modulate audio buffer frequency using Web Audio API

I'm experimenting with the Web Audio API, and my goal is to create a digital guitar where each string has an initial sound source of an actual guitar playing the string open; from that I would like to generate all the other fret-position sounds dynamically. After some research into the subject (this is all pretty new to me) it sounded like this might be achieved by altering the frequency of the source sound sample.
The problem is that I've seen lots of algorithms for altering synthesized sine waves, but nothing for altering the frequency of an audio sample. Here is a sample of my code to give a better idea of how I'm trying to implement this:
// Guitar chord buffer
var chordBuffer = null;
// Create audio context
var context = new webkitAudioContext();
// Load sound sample
var request = new XMLHttpRequest();
request.open('GET', 'chord.mp3', true);
request.responseType = 'arraybuffer';
request.onload = loadChord;
request.send();
// Handle guitar string "pluck"
$('.string').mouseenter(function(e){
    e.preventDefault();
    var source = context.createBufferSource();
    source.buffer = chordBuffer;
    // Create javaScriptNode so we can get at raw audio buffer
    var jsnode = context.createJavaScriptNode(1024, 1, 1);
    jsnode.onaudioprocess = changeFrequency;
    // Connect nodes and play
    source.connect(jsnode);
    jsnode.connect(context.destination);
    source.noteOn(0);
});

function loadChord() {
    context.decodeAudioData(
        request.response,
        function(pBuffer) { chordBuffer = pBuffer; },
        function(pError) { console.error(pError); }
    );
}

function changeFrequency(e) {
    var ib = e.inputBuffer.getChannelData(0);
    var ob = e.outputBuffer.getChannelData(0);
    var n = ib.length;
    for (var i = 0; i < n; ++i) {
        // Code needed...
    }
}
So there you have it. I can play the sound just fine, but I'm at a bit of a loss when it comes to writing the code in the changeFrequency function that would change the chord sample's frequency so it sounds like another fret position on the string. Any help with this code would be appreciated, as would opinions on whether what I'm attempting is even possible.
Thanks!
playbackRate will change the pitch of the sound, but also its playback time.
If you want to change only the pitch, maybe you can use a pitch shifter. Check my JavaScript pitch shifter implementation here and its use with a JavaScriptNode in this plugin.
You can get the desired behavior by setting playbackRate, but as Brad says, you're going to have to use multi-sampling. Also see this SO question: Setting playbackRate on audio element connected to web audio api.
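As a rough sketch of the playbackRate route, the rate for a given fret can be derived from the equal-tempered semitone ratio (note that the duration shortens along with the pitch; a real pitch shifter avoids that):
// Play the open-string sample shifted up by `fret` semitones via playbackRate
function playFret(ctx, chordBuffer, fret) {
    var source = ctx.createBufferSource();
    source.buffer = chordBuffer;
    source.playbackRate.value = Math.pow(2, fret / 12);   // 2^(n/12) per semitone
    source.connect(ctx.destination);
    source.start(0);
}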
