Web audio in Firefox - JavaScript

I am trying to build a web app that visualises and controls the source audio. It works brilliantly in Chrome, but completely breaks in Firefox: it won't even play the audio. Here is the code:
var audio = new Audio();
audio.src = 'track.mp3';
audio.controls = true;
audio.loop = false;
audio.autoplay = false;

var context, analyser, canvas, ctx, source;

window.addEventListener("load", initPlayer, false);

function initPlayer() {
    $("#player").append(audio);
    context = new AudioContext();
    analyser = context.createAnalyser();
    canvas = document.getElementById("vis");
    ctx = canvas.getContext("2d");
    source = context.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(context.destination);
}
The line that breaks everything is:
source = context.createMediaElementSource(audio);
After adding this line, the player just hangs at 0:00 in Firefox. I have done my research and come across CORS, but as far as I can understand this should be irrelevant, as the file is kept on the same server.
Please help

You have to serve the audio from an actual web server so that the MIME types are set correctly: run it from localhost rather than opening file:///..../track.mp3 directly.
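As a minimal sketch, assuming Node.js is available, a tiny static server like the following would serve the page and the MP3 with sensible MIME types (the file names are placeholders, and this is for local development only):
var http = require('http');
var fs = require('fs');
var path = require('path');

// Map file extensions to MIME types so the audio is served correctly.
var types = { '.html': 'text/html', '.js': 'application/javascript', '.mp3': 'audio/mpeg' };

http.createServer(function (req, res) {
    var file = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(file, function (err, data) {
        if (err) { res.writeHead(404); return res.end(); }
        res.writeHead(200, { 'Content-Type': types[path.extname(file)] || 'application/octet-stream' });
        res.end(data);
    });
}).listen(8000);
Then load the page from http://localhost:8000 instead of a file:// URL.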

We used to have a bug in Firefox where MediaElementAudioSourceNode did not work properly in some cases. It's now fixed (I believe the fix is in Aurora and Nightly, at the time of writing).
Sorry about that.

Related

Adding panner / spatial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element, and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();
    audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);

    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.distanceModel = 'inverse';
    panner.refDistance = 1;
    panner.maxDistance = 10000;
    panner.rolloffFactor = 1;
    panner.coneInnerAngle = 360;
    panner.coneOuterAngle = 0;
    panner.coneOuterGain = 0;
    panner.setPosition(10000, 0, 0); // this doesn't do anything

    // note: peerInput is never created from event.stream here (see the answers below)
    peerInput.connect(panner);
    panner.connect(audioCtx.destination);

    // attach the stream to the document element
    var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
    remote_media.attr("autoplay", "autoplay");
    if (MUTE_AUDIO_BY_DEFAULT) {
        remote_media.attr("muted", "false");
    }
    remote_media.attr("controls", "");
    peer_media_elements[peer_id] = remote_media;
    $('body').append(remote_media);
    attachMediaStream(remote_media[0], event.stream);
};
Try creating a source node from the event stream before connecting the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
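A minimal sketch of how that could slot into the onaddstream handler above, replacing the undefined peerInput (panner options trimmed for brevity):
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();

    // Create a source node from the incoming WebRTC stream...
    var source = audioCtx.createMediaStreamSource(event.stream);

    // ...then build and connect the panner as in the question.
    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.setPosition(10000, 0, 0);

    source.connect(panner);
    panner.connect(audioCtx.destination);
};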
Reference: Mozilla Developer Network - AudioContext
createPanner Reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
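For example, a self-contained oscillator test along these lines (a sketch, not from the original answer) makes it easy to hear whether panning works at all:
var ctx = new (window.AudioContext || window.webkitAudioContext)();

var osc = ctx.createOscillator();
osc.frequency.value = 440; // easy-to-hear test tone

var panner = ctx.createPanner();
panner.panningModel = 'HRTF';

osc.connect(panner);
panner.connect(ctx.destination);
osc.start();

// Flip the source left and right; the tone should move between ears.
var x = -10;
setInterval(function () {
    x = -x;
    panner.setPosition(x, 0, 0);
}, 1000);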
Figured this out for myself.
The problem was not the code; it was because I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and audio output downgrades to mono.
With mono audio you definitely cannot do 3D positioned sound, which is why I thought the code was not working.

WebRTC via Web Audio API silent on Google Chrome

I am taking the mediaStream from WebRTC and doing some audio processing and monitoring. It works on Firefox but is silent on Chrome.
Here is a simplified version with a single gainNode as an example.
const AudioContext = window.AudioContext || window.webkitAudioContext;
let myAudioCtx = new AudioContext();
let mySource = myAudioCtx.createMediaStreamSource(stream);
let gainNode = myAudioCtx.createGain();
gainNode.gain.value = 2;
mySource.connect(gainNode);
gainNode.connect(myAudioCtx.destination);
Whereas if I instead assign the stream directly to srcObject, I hear the sound.
It appears that createMediaStreamSource() is not returning any audio, because my monitoring shows silence. However, if I assign the stream from WebRTC to srcObject as well as running it through my monitoring, then the monitoring detects sound.
myAudioCtx.state says 'running'
Can't think of where else to check. Any help would be appreciated
Found the solution after a good night's sleep and looking at the MDN docs again.
You must assign the stream to the audio element:
audio.srcObject = stream;
but you then have to mute the output so it doesn't go directly to the speakers:
audio.muted = true;
This doesn't stop your Web Audio processing from working:
const AudioContext = window.AudioContext || window.webkitAudioContext;
let myAudioCtx = new AudioContext();
let mySource = myAudioCtx.createMediaStreamSource(stream);
let gainNode = myAudioCtx.createGain();
gainNode.gain.value = 2;
mySource.connect(gainNode);
gainNode.connect(myAudioCtx.destination);
This works on Chrome, Safari and Firefox.
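Putting the pieces together, the whole fix might look like this (a sketch; stream comes from the WebRTC callback):
// Keep the element fed by the stream, but silence its direct output.
const audio = new Audio();
audio.srcObject = stream;
audio.muted = true;

// The Web Audio graph still receives the audio and plays the processed signal.
const AudioContext = window.AudioContext || window.webkitAudioContext;
let myAudioCtx = new AudioContext();
let mySource = myAudioCtx.createMediaStreamSource(stream);
let gainNode = myAudioCtx.createGain();
gainNode.gain.value = 2;
mySource.connect(gainNode);
gainNode.connect(myAudioCtx.destination);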

Firefox AudioContext suspended

I am trying to record audio and upload it to a server using JavaScript. I am using Recorder.js by Matt Diamond, but the file that gets generated is 0 minutes long. When I debugged it in the Firebug console, I found that the AudioContext was suspended. From what I've read, the AudioContext's state must be running for recording to work. I don't know exactly whether the issue is the state or whether I am missing something else. I want to know what causes the state of an AudioContext to be suspended. If I try other browsers, the state is running and the file is generated, but my restriction is that I want to use Firefox for my application.
Firefox version: 42.0
Below is the code
if (audioRecorder) {
    audioRecorder.clear();
    audioRecorder.record();
    setTimeout(stopRecorder, 9000); // 9 secs
}

function stopRecorder() {
    if (audioRecorder) {
        audioRecorder.stop();
        audioRecorder.exportWAV(function(blob) {
            alert("Blob size: " + blob.size);
            // code for sending the blob to server
        });
    }
}
When I debugged the above code in Firebug, the AudioContext was suspended.
Thanks in advance
This is not a direct answer, but it solves the issue (taken from my other answer). If all you need is to send audio files to a server, then instead of using bulky uncompressed WAV files you can easily (and natively) record the audio in compressed Ogg format using the MediaRecorder API, supported in Firefox since v25 and Chrome since v47.
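A minimal sketch of that approach, assuming a getUserMedia microphone stream and keeping the question's 9-second window (the Ogg/Opus container type is what Firefox produces by default):
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var recorder = new MediaRecorder(stream);
    var chunks = [];

    recorder.ondataavailable = function (e) { chunks.push(e.data); };
    recorder.onstop = function () {
        var blob = new Blob(chunks, { type: 'audio/ogg; codecs=opus' });
        // send the blob to the server, e.g. with FormData and XMLHttpRequest
    };

    recorder.start();
    setTimeout(function () { recorder.stop(); }, 9000); // 9 secs, as in the question
});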
Created a JSFiddle and tested it a few times - works correctly on Firefox 42 [macosx]
https://jsfiddle.net/8unmn650/
function createDownloadLink() {
    recorder && recorder.exportWAV(function(blob) {
        var url = URL.createObjectURL(blob);
        var li = document.createElement('li');
        var au = document.createElement('audio');
        var hf = document.createElement('a');
        au.controls = true;
        au.src = url;
        hf.href = url;
        hf.download = new Date().toISOString() + '.wav';
        hf.innerHTML = hf.download;
        li.appendChild(au);
        li.appendChild(hf);
        recordingslist.appendChild(li);
    });
}
The Recorder.js demo seems to be working correctly on FF 42 [macosx].
There is an open issue on the recorder.js GitHub repo regarding Firefox creating 0-second WAV files:
Issue #139: Sometimes it is creating wav file of 0.0 duration on firefox

Connecting MediaElementAudioSourceNode to AudioContext.destination doesn't work

Here's a fiddle to show the problem. Basically, whenever the createMediaElementSource method of an AudioContext object is called, the output of the audio element is re-routed into the returned MediaElementAudioSourceNode. This is all fine and according to spec; however, when I then try to reconnect the output to the speakers (using the destination of the AudioContext), nothing happens.
Am I missing something obvious here? Maybe it has to do with cross-domain audio files? I just couldn't find any information on the topic on Google, and didn't see a note of it in the specs.
Code from the fiddle is:
var a = new Audio();
a.src = "http://webaudioapi.com/samples/audio-tag/chrono.mp3";
a.controls = true;
a.loop = true;
a.autoplay = true;
document.body.appendChild(a);
var ctx = new AudioContext();
// PROBLEM HERE
var shouldBreak = true;
var src;
if (shouldBreak) {
// this one stops playback
// it should redirect output from audio element to the MediaElementAudioSourceNode
// but src.connect(ctx.destination) does not fix it
src = ctx.createMediaElementSource(a);
src.connect(ctx.destination);
}
Yes, the Web Audio API requires that the audio adhere to the Same-Origin Policy. If the audio you're attempting to play is not from the same origin then the appropriate Access-Control headers are required. The resource in your example does not have the required headers.
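If the remote server does send the CORS headers, a sketch of requesting the file in CORS mode might look like this (the URL here is a placeholder):
var a = new Audio();
a.crossOrigin = "anonymous"; // ask for the file in CORS mode
// only works if the server responds with Access-Control-Allow-Origin
a.src = "https://example.com/chrono.mp3";
a.controls = true;
a.autoplay = true;
document.body.appendChild(a);

var ctx = new AudioContext();
var src = ctx.createMediaElementSource(a);
src.connect(ctx.destination); // audio now reaches the speakers again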

Web Audio API, frequency sound animation, Android Chrome v37

I am having trouble with frequency animation of sounds through the Web Audio API on Android Chrome v37.
I can hear the music, but the animation doesn't appear.
A lot of experimenting eventually led me to two separate ways of loading sounds and animating them.
In the first way, I load the sound via an HTML5 audio element, then create a MediaElementSource with the audio element as its parameter.
I connect the MediaElementSource to an Analyser (created with AudioContext.createAnalyser).
The Analyser I connect to a GainNode, and finally I connect the GainNode to AudioContext.destination.
Code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();

var audio = new Audio(path_to_file);
var source = acontext.createMediaElementSource(audio);

source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
This schema works on desktop Chrome and the newest mobile Safari.
Also in Firefox.
The second way I found has a few differences.
Here the sound is read into a buffer, which is then connected to the analyser.
code:
var acontext = new AudioContext();
var analyser = acontext.createAnalyser();
var gainNode = acontext.createGain();
var source = acontext.createBufferSource();

var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.addEventListener('load', function() {
    // old two-argument form of createBuffer (this is what breaks; see below)
    source.buffer = acontext.createBuffer(request.response, false);
}, false);
request.send();

source.connect(analyser);
analyser.connect(gainNode);
gainNode.connect(acontext.destination);
For drawing the animation I use a canvas; the data for drawing:
analyser.fftSize = 1024;
analyser.smoothingTimeConstant = 0.92;

var dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(dataArray); // fill dataArray from the analyser

for (var i = 0; i < analyser.frequencyBinCount; i++) {
    var barHeight = dataArray[i];
    // and other logic here.
}
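Filled out, that draw logic might look like this (a sketch; canvas and ctx are the canvas element and its 2D context, and the bar sizing is arbitrary):
function draw() {
    requestAnimationFrame(draw);
    analyser.getByteFrequencyData(dataArray);

    ctx.clearRect(0, 0, canvas.width, canvas.height);
    var barWidth = canvas.width / analyser.frequencyBinCount;
    for (var i = 0; i < analyser.frequencyBinCount; i++) {
        var barHeight = dataArray[i]; // byte value, 0..255
        ctx.fillRect(i * barWidth, canvas.height - barHeight, barWidth, barHeight);
    }
}
draw();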
The second way works on older Chromes, mobile browsers, and Safari.
But in Android Chrome v37 neither way works. As I said before, the first way doesn't show the animation, and the second one just breaks with an error: acontext.createBuffer() requires 3 parameters instead of 2.
As I understand it, in the newer Web Audio API this method was rewritten with a different signature and different parameters, so I can't use the old call.
Any advice on how to make this work in Android Chrome v37?
I found a cross-browser solution:
acontext.decodeAudioData(request.response, function (buffer) {
    source.buffer = buffer;
});
This works correctly in all browsers. But I dropped the audio tags and load sounds over XMLHttpRequest instead. If you know a way to get the buffer out of an audio element so it can be decoded with decodeAudioData, please comment on how.
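For completeness, the full XMLHttpRequest flow with decodeAudioData might look like this (a sketch using the variable names from the question's second example):
var request = new XMLHttpRequest();
request.open('GET', path, true);
request.responseType = 'arraybuffer';
request.addEventListener('load', function () {
    acontext.decodeAudioData(request.response, function (buffer) {
        source.buffer = buffer; // source is connected to the analyser as above
        source.start(0);        // start playback once the buffer is decoded
    }, function (err) {
        console.error('decodeAudioData failed', err);
    });
}, false);
request.send();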
