Audio plays in all browsers but Safari desktop [duplicate] - javascript

I've got an Angular 5 application where I've set the click handler of a button to download an audio file and play it. I'm using this code to do so:
onPreviewPressed(media: Media): void {
    const url = ".....";
    this.httpClient.get(url, {responseType: 'blob'}).subscribe(x => {
        const fileReader = new FileReader();
        fileReader.onloadend = () => {
            const context = new ((<any>window).AudioContext || (<any>window).webkitAudioContext)();
            const source = context.createBufferSource();
            context.decodeAudioData(fileReader.result, buffer => {
                source.buffer = buffer;
                source.connect(context.destination);
                source.start(0);
            }, y => {
                console.info("Error: " + y);
            });
        };
        fileReader.readAsArrayBuffer(x);
    });
}
If I go to the page in Chrome and press the button, the audio starts right up. If I do it in Safari, nothing happens. I know Safari has locked things down, but this is in response to a button click; it's not auto-play.
The audio is sent back from the server via a PHP script, and it's sending headers like this, in case it matters:
header("Content-Type: audio/mpeg");
header('Content-Transfer-Encoding: binary');
header('Content-Length: ' . filesize($_GET['file']));
header('Cache-Control: no-cache');

No, it is not "in response to a button click".
In response to this click event, you are starting an asynchronous task. By the time you call source.start(0), your event is long dead (or at least no longer a "trusted user gesture"), so the browser will indeed block this call.
To circumvent this, you can simply mark your context as allowed with silence. Then, when the data becomes available, you'll be able to start it with no restriction:
function markContextAsAllowed(context) {
    const gain = context.createGain();
    gain.gain.value = 0; // silence
    const osc = context.createOscillator();
    osc.connect(gain);
    gain.connect(context.destination);
    osc.onended = e => gain.disconnect();
    osc.start(0);
    osc.stop(0.01);
}
onPreviewPressed(media: Media): void {
    const url = ".....";
    // declare in the event handler
    const context = new ((<any>window).AudioContext || (<any>window).webkitAudioContext)();
    const source = context.createBufferSource();
    // allow context synchronously
    markContextAsAllowed(context);
    this.httpClient.get(url, {responseType: 'blob'}).subscribe(x => {
        const fileReader = new FileReader();
        fileReader.onloadend = () => {
            context.decodeAudioData(fileReader.result, buffer => {
                source.buffer = buffer;
                source.connect(context.destination);
                source.start(0);
            }, y => {
                console.info("Error: " + y);
            });
        };
        fileReader.readAsArrayBuffer(x);
    });
}
As a fiddle, since Safari doesn't like over-protected StackSnippets®.
Also, my Angular knowledge is very limited, but if httpClient.get supports the {responseType: 'arraybuffer'} option, you could get rid of this FileReader and avoid populating memory twice with the same data, along the lines of the sketch below.
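A hedged sketch of that variant, assuming your HttpClient version accepts { responseType: 'arraybuffer' } and reusing the context and source from the snippet above:
this.httpClient.get(url, { responseType: 'arraybuffer' }).subscribe(ab => {
    // decode the ArrayBuffer directly; no FileReader round-trip
    context.decodeAudioData(ab, buffer => {
        source.buffer = buffer;
        source.connect(context.destination);
        source.start(0);
    }, y => console.info("Error: " + y));
});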
Finally, if you are going to play this audio more than once, consider prefetching and pre-decoding it; you'll then be able to avoid the whole asynchronous mess, as in the sketch below.
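A sketch of that prefetch idea, assuming the preview URL is known up front (previewUrl is a hypothetical variable; the callback form of decodeAudioData is used since older Safari lacks the promise form):
const context = new ((<any>window).AudioContext || (<any>window).webkitAudioContext)();
let previewBuffer; // decoded once at startup, reused on every click

fetch(previewUrl)
    .then(res => res.arrayBuffer())
    .then(ab => context.decodeAudioData(ab, buffer => { previewBuffer = buffer; }));

onPreviewPressed(media: Media): void {
    if (!previewBuffer) { return; } // still downloading/decoding
    context.resume(); // we're inside a click, so the gesture can unlock the context
    const source = context.createBufferSource();
    source.buffer = previewBuffer;
    source.connect(context.destination);
    source.start(0); // everything here is synchronous, so no gesture problem
}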

Related

Javascript MediaRecorder audio recording corrupt

I am struggling to record audio in the browser and make it work properly on mobile as well as on desktop.
I am using MediaRecorder to start the recording and I want to send it as a file to my Flask server through a form. However, what I receive is a corrupt file that sometimes plays on my desktop, but not on my mobile phone. I think it is connected to the different mimeTypes that are supported and to how the blob gets converted.
Here is the JavaScript Code:
function record_audio() {
    if (state == "empty") {
        navigator.mediaDevices.getUserMedia({ audio: true })
            .then(stream => {
                mediaRecorder = new MediaRecorder(stream);
                mediaRecorder.start();
                state = "recording";
                document.getElementById('stop_btn').style.display = 'block';
                seconds_int = setInterval(
                    function () {
                        document.getElementById("record_btn").innerHTML = seconds_rec + " s";
                        seconds_rec += 1;
                    }, 1000);
                mediaRecorder.addEventListener("dataavailable", event => {
                    audioChunks.push(event.data);
                    if (mediaRecorder.state == 'inactive') makeLink();
                });
            });
    }
}
function makeLink() {
    const audioBlob = new Blob(audioChunks, {type: 'audio/mpeg'});
    const audioUrl = URL.createObjectURL(audioBlob);
    var sound = document.createElement('audio');
    sound.id = 'audio-player';
    sound.controls = 'controls';
    sound.src = audioUrl;
    console.log(audioBlob);
    sound.type = 'audio/mpeg';
    document.getElementById("audio-player-container").innerHTML = sound.outerHTML;
    let file = new File([audioBlob], "audio.mp3", { type: "audio/mpeg", lastModifiedDate: new Date() });
    let container = new DataTransfer();
    container.items.add(file);
    document.getElementById("uploadedFile").files = container.files;
}
Thanks for your help!
The audio that you recorded is most likely not of type 'audio/mpeg'. No browser supports that out of the box.
If you call new MediaRecorder(stream) without the optional second argument, the browser will pick the codec it likes best. You can use the mimeType property to find out which codec the browser chose. It can, for example, be used to construct the Blob:
const audioBlob = new Blob(
    audioChunks,
    {
        type: mediaRecorder.mimeType
    }
);
You would also need to use it in a similar way when creating the File, as sketched below. And you will probably also need to adapt your backend logic to handle files which aren't MP3s.
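A hedged sketch of that File creation, deriving the extension from the recorder's actual mimeType (the extension mapping here is illustrative, not exhaustive):
const mimeType = mediaRecorder.mimeType; // e.g. 'audio/webm;codecs=opus' in Chrome
const extension = mimeType.includes('ogg') ? 'ogg'
                : mimeType.includes('webm') ? 'webm'
                : 'bin'; // fallback for other containers
const file = new File([audioBlob], 'audio.' + extension, { type: mimeType });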

JavaScript: Use MediaRecorder to record streams from <video> but failed

I'm trying to record parts of the video from a <video> tag and save them for later use. I found this article: Recording a media element, which describes a method of first calling stream = video.captureStream(), then using new MediaRecorder(stream) to get a recorder.
I've tested some demos, and MediaRecorder works fine if the stream comes from the user's device (such as a microphone). However, when it comes to a media element, my Firefox browser throws an exception: MediaRecorder.start: The MediaStream's isolation properties disallow access from MediaRecorder.
So any idea on how to deal with it?
Browser: Firefox
The page (including the JS file) is stored locally.
The src attribute of the <video> tag can be either a file from local storage or a URL from the Internet.
Code snippets:
let chunks = [];

let getCaptureStream = function () {
    let stream;
    const fps = 0;
    if (video.captureStream) {
        console.log("use captureStream");
        stream = video.captureStream(fps);
    } else if (video.mozCaptureStream) {
        console.log("use mozCaptureStream");
        stream = video.mozCaptureStream(fps);
    } else {
        console.error('Stream capture is not supported');
        stream = null;
    }
    return stream;
};
video.addEventListener('play', () => {
    let stream = getCaptureStream();
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.onstop = function () {
        const newVideo = document.createElement('video');
        newVideo.setAttribute('controls', '');
        newVideo.controls = true;
        // the MIME type belongs on the Blob; createObjectURL takes no options
        const blob = new Blob(chunks, { 'type': 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' });
        chunks = [];
        const videoURL = window.URL.createObjectURL(blob);
        newVideo.src = videoURL;
        document.body.appendChild(newVideo);
    };
    mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
    };
    stopButton.onclick = function () {
        mediaRecorder.stop();
    };
    mediaRecorder.start(); // This is the line that triggers the exception.
});
I found the solution myself.
When I turned to Chrome, it showed a CORS issue that prevented me from even playing the original video. So I guess it's the security policy that prevents MediaRecorder from accessing MediaStreams. Therefore, I deployed the local files to a local server, following the instructions on this page.
After that, the MediaRecorder started working. Hope this will help someone in need.
But still, the official documentation doesn't seem to say much about the isolation properties of media elements, so any idea or further explanation is welcome.
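For remote sources, one untested direction (assuming the remote server actually sends CORS headers such as Access-Control-Allow-Origin; the URL is hypothetical) is to mark the element crossOrigin before loading it, so the captured stream should not be flagged as isolated:
const video = document.querySelector('video');
video.crossOrigin = 'anonymous'; // fetch the media with CORS
video.src = 'https://example.com/clip.mp4';
// after this, video.captureStream() should yield a stream MediaRecorder can access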

Splitting large file load into chunks, stitching to AudioBuffer?

In my app, I have an hour-long audio file that's entirely sound effects. Unfortunately I do need them all - they're species-specific sounds, so I can't cut any of them out. They were separate before, but I audiosprite'd them all into one large file.
The export file is about 20MB compressed, but it's still a large download for users with a slow connection. I need this file to be in an AudioBuffer, since I'm seeking to sections of an audioSprite and using loopStart/loopEnd to only loop that section. I more or less need the whole thing downloaded before playback can start, because the requested species are randomly picked when the app starts. They could be looking for sounds at the start of the file, or at the very end.
What I'm wondering is, if I were to split this file in fourths, could I load them in in parallel, and stitch them into the full AudioBuffer once loading finishes? I'm guessing I'd be merging multiple arrays, but only performing decodeAudioData() once? Requesting ~100 separate files (too many) was what brought me to audiosprites in the first place, but I'm wondering if there's a way to leverage some amount of async loading to lower the time it takes. I thought about having four <audio> elements and using createMediaElementSource() to load them, but my understanding is that I can't (?) turn a MediaElementSource into an AudioBuffer.
Consider playing the file immediately in chunks instead of waiting for the entire file to download. You could do this with the Streams API, combined with either:
Queuing chunks with the Media Source Extensions (MSE) API and switching between buffers.
Playing back decoded PCM audio with the Web Audio API and AudioBuffer.
See the examples for low-latency audio playback of file chunks as they are received; a rough sketch of the MSE route follows.
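A rough sketch of the MSE route, assuming the browser's MSE implementation accepts the 'audio/mpeg' type (as Chrome's does) and using a hypothetical URL:
const audio = document.querySelector('audio');
const mediaSource = new MediaSource();
audio.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener('sourceopen', async () => {
    const sourceBuffer = mediaSource.addSourceBuffer('audio/mpeg');
    const response = await fetch('/sounds/sprite.mp3');
    const reader = response.body.getReader();
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        sourceBuffer.appendBuffer(value);
        // appendBuffer is asynchronous; wait for it to finish before the next append
        await new Promise(resolve =>
            sourceBuffer.addEventListener('updateend', resolve, { once: true }));
    }
    mediaSource.endOfStream();
});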
I think in principle you can. Just download each chunk as an ArrayBuffer, concatenate all of the chunks together and send that to decodeAudioData.
But if you're on a slow link, I'm not sure how downloading in parallel will help.
Edit: this code is functional, but on occasion produces really nasty audio glitches, so I don't recommend using it without further testing. I'm leaving it here in case it helps someone else figure out working with Uint8Arrays.
So here's a basic version of it, basically what Raymond described. I haven't tested this with a split version of the large file yet, so I don't know if it improves the load speed at all, but it works. The JS is below, but if you want to test it yourself, here's the pen.
// mp3 link is from: https://codepen.io/SitePoint/pen/JRaLVR
(function () {
    'use strict';

    const context = new AudioContext();
    let bufferList = [];

    // change the urlList for your needs
    const URL = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/123941/Yodel_Sound_Effect.mp3';
    const urlList = [URL, URL, URL, URL, URL, URL];

    const loadButton = document.querySelector('.loadFile');
    const playButton = document.querySelector('.playFile');
    loadButton.onclick = () => loadAllFiles(urlList, loadProgress);

    function play(audioBuffer) {
        const source = context.createBufferSource();
        source.buffer = audioBuffer;
        source.connect(context.destination);
        source.start();
    }

    // concatenates all the buffers into one collected ArrayBuffer
    function concatBufferList(buflist, len) {
        let tmp = new Uint8Array(len);
        let pos = 0;
        for (let i = 0; i < buflist.length; i++) {
            tmp.set(new Uint8Array(buflist[i]), pos);
            pos += buflist[i].byteLength;
        }
        return tmp.buffer;
    }

    function loadAllFiles(list, onProgress) {
        let fileCount = 0;
        let fileSize = 0;
        for (let i = 0; i < list.length; i++) {
            loadFileXML(list[i], loadProgress, i).then(e => {
                bufferList[i] = e.buf;
                fileSize += e.size;
                fileCount++;
                if (fileCount == bufferList.length) {
                    let b = concatBufferList(bufferList, fileSize);
                    context.decodeAudioData(b).then(audioBuffer => {
                        playButton.disabled = false;
                        playButton.onclick = () => play(audioBuffer);
                    }).catch(error => console.log(error));
                }
            });
        }
    }

    // adapted from petervdn's audiobuffer-load on npm
    function loadFileXML(url, onProgress, index) {
        return new Promise((resolve, reject) => {
            const request = new XMLHttpRequest();
            request.open('GET', url, true);
            request.responseType = 'arraybuffer';
            if (onProgress) {
                request.onprogress = event => {
                    onProgress(event.loaded / event.total);
                };
            }
            request.onload = () => {
                if (request.status === 200) {
                    const fileSize = request.response.byteLength;
                    resolve({
                        buf: request.response,
                        size: fileSize
                    });
                } else {
                    reject(`Error loading '${url}' (${request.status})`);
                }
            };
            request.onerror = error => {
                reject(error);
            };
            request.send();
        });
    }

    function loadProgress(e) {
        console.log("Progress: " + e);
    }
}());

AnalyserNode.getFloatFrequencyData always returns -Infinity

Alright, so I'm trying to determine the intensity (in dB) of samples of an audio file recorded in the user's browser.
I have been able to record it and play it through an HTML <audio> element.
But when I try to use this element as a source and connect it to an AnalyserNode, AnalyserNode.getFloatFrequencyData always returns an array full of -Infinity, getByteFrequencyData always returns zeroes, and getByteTimeDomainData is full of 128s.
Here's my code:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var bufferLength = analyser.frequencyBinCount;
var data = new Float32Array(bufferLength);

mediaRecorder.onstop = function (e) {
    var blob = new Blob(chunks, { 'type': 'audio/ogg; codecs=opus' });
    chunks = [];
    var audioURL = window.URL.createObjectURL(blob);
    // audio is an HTML audio element
    audio.src = audioURL;
    audio.addEventListener("canplaythrough", function () {
        source = audioCtx.createMediaElementSource(audio);
        source.connect(analyser);
        analyser.connect(audioCtx.destination);
        analyser.getFloatFrequencyData(data);
        console.log(data);
    });
};
Any idea why the AnalyserNode behaves as if the source were empty/muted? I also tried using the stream as the source while recording, with the same result.
I ran into the same issue; thanks to some of your code snippets, I made it work on my end (the code below is TypeScript and will not work in the browser as-is at the moment of writing):
audioCtx.decodeAudioData(this.result as ArrayBuffer).then(function (buffer: AudioBuffer) {
    soundSource = audioCtx.createBufferSource();
    soundSource.buffer = buffer;
    //soundSource.connect(audioCtx.destination); //I do not need to play the sound
    soundSource.connect(analyser);
    soundSource.start(0);
    setInterval(() => {
        calc(); //In here, I will get the analyzed data with analyser.getFloatFrequencyData
    }, 300); //This can be changed to 0.
    // The interval helps with making sure the buffer has the data
});
Some explanation (I'm still a beginner when it comes to the Web Audio API, so my explanation might be wrong or incomplete):
An analyser needs to be able to analyse a specific part of your sound file. In this case I create an AudioBufferSourceNode that contains the buffer that I got from decoding the audio data. I feed the buffer to the source, which eventually will be copied inside the analyser. However, without the interval callback, the buffer never seems to be ready, and the analysed data contains -Infinity (which I assume is the absence of any sound, as it has nothing to read) at every index of the array. That is why the interval is there: it analyses the data every 300 ms.
Hope this helps someone!
You need to fetch the audio file and decode the audio buffer.
The URL of the audio source must also be on the same domain or have the correct CORS headers (as mentioned by Anthony).
Note: Replace <FILE-URI> with the path to your file in the example below.
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var source;
var analyser = audioCtx.createAnalyser();
var button = document.querySelector('button');
var freqs;
var times;

button.addEventListener('click', (e) => {
    fetch("<FILE-URI>", {
        headers: new Headers({
            "Content-Type": "audio/mpeg"
        })
    }).then(function (response) {
        return response.arrayBuffer();
    }).then((ab) => {
        audioCtx.decodeAudioData(ab, (buffer) => {
            source = audioCtx.createBufferSource();
            source.connect(audioCtx.destination);
            source.connect(analyser);
            source.buffer = buffer;
            source.start(0);
            viewBufferData();
        });
    });
});

// Watch the changes in the audio buffer
function viewBufferData() {
    setInterval(function () {
        freqs = new Uint8Array(analyser.frequencyBinCount);
        times = new Uint8Array(analyser.frequencyBinCount);
        analyser.smoothingTimeConstant = 0.8;
        analyser.fftSize = 2048;
        analyser.getByteFrequencyData(freqs);
        analyser.getByteTimeDomainData(times);
        console.log(freqs);
        console.log(times);
    }, 1000);
}
Is the source file from a different domain? That would fail in createMediaElementSource.
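If CORS is the problem, one possible direction (a sketch under the assumption that the remote server sends Access-Control-Allow-Origin; the URL is hypothetical) is to mark the element crossOrigin before loading, so its data is CORS-clean and createMediaElementSource does not just produce silence:
var audio = new Audio();
audio.crossOrigin = 'anonymous'; // request the file with CORS
audio.src = 'https://other-domain.example/sound.mp3';
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var mediaSource = ctx.createMediaElementSource(audio);
var analyserNode = ctx.createAnalyser();
mediaSource.connect(analyserNode);
analyserNode.connect(ctx.destination);
audio.play();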

Playing a simple sound with web audio api

I've been trying to follow the steps in some tutorials for playback of a simple, encoded local WAV or MP3 file with the Web Audio API, using a button. My code is the following (testAudioAPI.js):
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
var myBuffer;

clickme = document.getElementById('clickme');
clickme.addEventListener('click', clickHandler);

var request = new XMLHttpRequest();
request.open('GET', 'WoodeBlock_SMan_B.wav', true);
request.responseType = 'arraybuffer';

// Decode asynchronously
request.onload = function () {
    context.decodeAudioData(request.response, function (theBuffer) {
        myBuffer = theBuffer;
    }, onError);
};
request.send();

function playSound(buffer) {
    var source = context.createBufferSource(), g = context.createGain();
    source.buffer = buffer;
    source.start(0);
    g.gain.value = 0.5;
    source.connect(g);
    g.connect(context.destination);
}

function clickHandler(e) {
    playSound(myBuffer);
}
And the HTML file would look like this:
<!doctype html>
<html>
<body>
    <button id="clickme">Play</button>
    <script src='testAudioAPI.js'></script>
</body>
</html>
However, no sound is achieved whatsoever. I've tried several snippets but I still can't figure it out. When I try to generate a sound by synthesizing it by creating an oscillator node, I do get sound, but not with buffers from local files. What would be the problem here? Thank you all.
A minimalistic approach in modern ES6:
1. new AudioContext()
2. context.createBufferSource()
3. source.buffer = audioBuffer, where audioBuffer is ArrayBuffer data requested via fetch and then decoded to an AudioBuffer by decodeAudioData
4. source.start()
<button id="start">playSound</button>
const audioPlay = async url => {
    const context = new AudioContext();
    const source = context.createBufferSource();
    const audioBuffer = await fetch(url)
        .then(res => res.arrayBuffer())
        .then(arrayBuffer => context.decodeAudioData(arrayBuffer));
    source.buffer = audioBuffer;
    source.connect(context.destination);
    source.start();
};

document.querySelector('#start').onclick = () => audioPlay('music/music.mp3');
To stop playback: source.stop();
The Web Audio API cannot start playing automatically; playback needs to be triggered by a user event.
Creating multiple AudioContext objects will cause an error, so you should close the old context before creating a new one:
Failed to construct 'AudioContext': number of hardware contexts reached maximum
const audioPlay = (() => {
    let context = null;
    return async url => {
        if (context) context.close();
        context = new AudioContext();
        const source = context.createBufferSource();
        source.buffer = await fetch(url)
            .then(res => res.arrayBuffer())
            .then(arrayBuffer => context.decodeAudioData(arrayBuffer));
        source.connect(context.destination);
        source.start();
    };
})();

document.querySelector('#start').onclick = () => audioPlay('music/music.mp3');
Local files, huh? Are you just grabbing them from the filesystem (e.g. "file://"), or do you have a local web server running?
From what I recall, serving directly from the filesystem violates CORS in most (all?) browsers – which will prevent your AJAX request from succeeding.
Maybe try serving the page with python -m SimpleHTTPServer (or python3 -m http.server on Python 3) or similar, and then access it at http://localhost:8000.
From what I can tell, all of your code looks fine.
You have a call to onError which isn't defined.
Apart from that it should work, though not in Chrome, because Chrome blocks all XMLHttpRequests to local files. With Firefox, for example, it should work once you define (or get rid of) your call to onError, as sketched below.
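A minimal sketch of such a handler, matching the onError identifier already referenced in the question's code:
function onError(e) {
    // called by decodeAudioData when the file can't be decoded
    console.error('decodeAudioData failed:', e);
}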
You can create a URL reference to play a local file with two lines of code:
const sound = new URL('../assets/sound.mp3', import.meta.url)
new Audio(sound.href).play()
The href is only needed to satisfy TypeScript; otherwise you can omit that suffix.
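One hedged aside: in browsers where play() returns a promise, it can reject when playback is blocked outside a user gesture, so it is worth handling:
new Audio(sound.href).play().catch(err => console.warn('Playback blocked:', err));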
