I'm working on a side project that is a music app, and it requires me to play large files. I've figured out that there are a few solutions: one is to download the music to local storage and use that whenever it's available; the other is to stream the music over the network. I have been unsuccessful with the latter.
My question is: how do I play audio from a stream?
This is the code I have so far:
fetch(audio)
  .then((res) => {
    const reader = res.body.getReader()
    const audioObj = new Audio()
    reader.read().then(function processAudio({ done, value }) {
      if (done) {
        console.log("You got all the data")
        return
      }
      /// DO SOMETHING WITH VALUE
      return reader.read().then(processAudio)
    })
  })
  .catch((err) => {
    console.log(`Something went wrong ${err}`)
  })
I have tried looking at other APIs within the browser, e.g. MediaStream.
Any help is much appreciated. Thank you.
There is the Web Audio API, which might be useful. It appears to give you exactly what you need:
Web Audio API
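If it helps as a starting point, here is a minimal sketch of that approach, reusing the audio URL variable from your snippet. Note it is not true streaming: it fetches the whole file, decodes it, then plays it through an AudioContext (modern browsers return a promise from decodeAudioData):
const ctx = new (window.AudioContext || window.webkitAudioContext)()

fetch(audio)
  .then((res) => res.arrayBuffer())
  .then((data) => ctx.decodeAudioData(data))
  .then((buffer) => {
    // play the decoded file from the beginning
    const source = ctx.createBufferSource()
    source.buffer = buffer
    source.connect(ctx.destination)
    source.start()
  })
  .catch((err) => {
    console.log(`Something went wrong ${err}`)
  })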
I've been building a music app, and today I finally got to the point where I started trying to work music playback into it.
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS. I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end, where they are processed by the Web Audio API and scheduled to play.
When they play, they are all in the correct order, but there is a very tiny glitch or skip at the same spots every time (presumably between chunks) that I can't seem to get rid of. As far as I can tell, they are all scheduled right up next to each other, so I can't find a reason why there should be any sort of gap or overlap between them. Any help would be appreciated. Here's the code:
Socket Route
socket.on('stream-audio', () => {
  db.client.db("dev").collection('music.files').findOne({"metadata.songId": "3"}).then((result) => {
    const bucket = new GridFSBucket(db.client.db("dev"), {
      bucketName: "music"
    });
    bucket.openDownloadStream(result._id).on('data', (chunk) => {
      socket.emit('audio-chunk', chunk)
    });
  });
});
Front end
//These variables are declared as object variables, hence all of the "this" keywords
context: new (window.AudioContext || window.webkitAudioContext)(),
freeTime: null,
numChunks: 0,
chunkTracker: [],
...
this.socket.on('audio-chunk', (chunk) => {
  //Keeping track of chunk decoding status so that they don't get scheduled out of order
  const chunkId = this.numChunks
  this.chunkTracker.push({
    id: chunkId,
    complete: false,
  });
  this.numChunks += 1;
  //Callback to the decodeAudioData function
  const decodeCallback = (buffer) => {
    var shouldExecute = false;
    const trackIndex = this.chunkTracker.map((e) => e.id).indexOf(chunkId);
    //Checking if either it's the first chunk or the previous chunk has completed
    if (trackIndex !== 0) {
      const prevChunk = this.chunkTracker.filter((e) => e.id === (chunkId - 1))
      if (prevChunk[0].complete) {
        shouldExecute = true;
      }
    } else {
      shouldExecute = true;
    }
    //THIS IS THE ACTUAL WEB AUDIO API STUFF
    if (shouldExecute) {
      if (this.freeTime === null) {
        this.freeTime = this.context.currentTime
      }
      const source = this.context.createBufferSource();
      source.buffer = buffer
      source.connect(this.context.destination)
      if (this.context.currentTime >= this.freeTime) {
        source.start()
        this.freeTime = this.context.currentTime + buffer.duration
      } else {
        source.start(this.freeTime)
        this.freeTime += buffer.duration
      }
      //Update the tracker to mark this chunk as complete
      this.chunkTracker[trackIndex] = {id: chunkId, complete: true}
    } else {
      //If the previous chunk hasn't processed yet, check again in 50ms
      setTimeout((passBuffer) => {
        decodeCallback(passBuffer)
      }, 50, buffer);
    }
  }
  decodeCallback.bind(this);
  this.context.decodeAudioData(chunk, decodeCallback);
});
Any help would be appreciated, thanks!
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS.
You can do this if you want, but these days we have tools like Minio, which can make this easier using more common APIs.
I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end
Don't go this route. There's no reason for the overhead of web sockets, or Socket.IO. A normal HTTP request would be fine.
where they are processed by the Web Audio API and scheduled to play.
You can't stream this way. The Web Audio API doesn't support useful streaming, unless you happen to have raw PCM chunks, which you don't.
As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them.
Lossy codecs aren't going to give you sample-accurate output. Especially with MP3, if you give it some arbitrary number of samples, you're going to end up with at least one full MP3 frame (~576 samples) output. The reality is that you need data ahead of the first audio frame for it to work properly. If you want to decode a stream, you need a stream to start with. You can't independently decode MP3 this way.
Fortunately, the solution also simplifies what you're doing. Simply return an HTTP stream from your server, and use an HTML audio element (<audio> or new Audio(url)). The browser will handle all the buffering. Just make sure your server handles range requests, and you're good to go.
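To make that concrete, here is a sketch of such a route, assuming Express and the same GridFSBucket setup as in the question (the "dev" database, "music" bucket, and metadata.songId field come from the question; the route path, connection string, and audio/mpeg content type are assumptions):
const express = require('express');
const { MongoClient, GridFSBucket } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017'); // assumed connection string
client.connect(); // connect once at startup

app.get('/songs/:songId', async (req, res) => {
  const db = client.db('dev');
  const file = await db.collection('music.files')
    .findOne({ 'metadata.songId': req.params.songId });
  if (!file) return res.sendStatus(404);

  const bucket = new GridFSBucket(db, { bucketName: 'music' });
  const range = req.headers.range; // e.g. "bytes=1024-" sent by <audio>

  if (range) {
    const [startStr, endStr] = range.replace('bytes=', '').split('-');
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : file.length - 1;
    res.status(206).set({
      'Content-Range': `bytes ${start}-${end}/${file.length}`,
      'Accept-Ranges': 'bytes',
      'Content-Length': end - start + 1,
      'Content-Type': 'audio/mpeg',
    });
    // GridFS treats "end" as exclusive, hence end + 1
    bucket.openDownloadStream(file._id, { start, end: end + 1 }).pipe(res);
  } else {
    res.set({
      'Accept-Ranges': 'bytes',
      'Content-Length': file.length,
      'Content-Type': 'audio/mpeg',
    });
    bucket.openDownloadStream(file._id).pipe(res);
  }
});

app.listen(3000);
On the front end, playback then reduces to new Audio('/songs/3').play(); the browser issues the range requests and handles buffering and seeking for you.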
I have a language-learning site that I am working on. Users can click on objects and hear the audio for what they click on. Many of the people who will be using this are in more remote areas with slower Internet connections. Because of this, I need to cache the audio before each of the activities is loaded; otherwise there is too much of a delay.
Previously, I was having an issue where preloading would not work because iOS devices do not allow audio to load without a click event. I have gotten around this; however, I now have another issue: iOS/Safari only allows the most recent audio file to be loaded. Therefore, whenever the user clicks on another audio file (even if it was clicked on previously), it is not cached and the browser has to download it again.
So far I have not found an adequate solution to this. There are many posts from around 2011-2012 that try to deal with this, but I have not found a good solution. One suggestion was to combine all the audio clips for an activity into a single audio file, so that only one audio file would be loaded into memory per activity, and you then play a particular part of that file for each clip (see the sketch below). While this may work, it also becomes a nuisance whenever an audio clip needs to be changed, added, or removed.
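For reference, that single-file ("audio sprite") approach boils down to something like the following sketch, where the file name, clip names, and offsets are all made up:
// One combined file per activity; each clip is a start offset plus duration.
const sprite = new Audio("activity1-clips.mp3")
const clips = {
  dog: { start: 0.0, duration: 1.2 },
  cat: { start: 1.5, duration: 0.9 },
}

function playClip(name) {
  const { start, duration } = clips[name]
  sprite.currentTime = start
  sprite.play()
  // setTimeout is only roughly accurate, but close enough for short clips
  setTimeout(() => sprite.pause(), duration * 1000)
}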
I need something that works well in a ReactJS/Redux environment and caches properly on iOS devices.
Is there a 2020 solution that works well?
You can use IndexedDB. It's a low-level API for client-side storage of significant amounts of structured data, including files/blobs. The IndexedDB API is powerful, but it may seem too complicated for simple cases. If you'd prefer a simple API, try libraries such as localForage or dexie.js.
localForage is a polyfill providing a simple name:value syntax for client-side data storage, which uses IndexedDB in the background but falls back to WebSQL and then localStorage in browsers that don't support IndexedDB.
You can check the browser support for IndexedDB here: https://caniuse.com/#search=IndexedDB. It's well supported. Here is a simple example I made to show the concept:
index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Audio</title>
</head>
<body>
  <h1>Audio</h1>
  <div id="container"></div>
  <script src="localForage.js"></script>
  <script src="main.js"></script>
</body>
</html>
main.js
"use strict";
(function() {
localforage.setItem("test", "working");
// create HTML5 audio player
function createAudioPlayer(audio) {
const audioEl = document.createElement("audio");
const audioSrc = document.createElement("source");
const container = document.getElementById("container");
audioEl.controls = true;
audioSrc.type = audio.type;
audioSrc.src = URL.createObjectURL(audio);
container.append(audioEl);
audioEl.append(audioSrc);
}
window.addEventListener("load", e => {
console.log("page loaded");
// get the audio from indexedDB
localforage.getItem("audio").then(audio => {
// it may be null if it doesn't exist
if (audio) {
console.log("audio exist");
createAudioPlayer(audio);
} else {
console.log("audio doesn't exist");
// fetch local audio file from my disk
fetch("panumoon_-_sidebyside_2.mp3")
// convert it to blob
.then(res => res.blob())
.then(audio => {
// save the blob to indexedDB
localforage
.setItem("audio", audio)
// create HTML5 audio player
.then(audio => createAudioPlayer(audio));
});
}
});
});
})();
localForage.js just includes the code from here: https://github.com/localForage/localForage/blob/master/dist/localforage.js
You can check IndexedDB in Chrome dev tools, and you will find our items there. If you refresh the page, you will still see them, and you will see the audio player created as well. I hope this answered your question.
BTW, older versions of Safari on iOS didn't support storing Blobs in IndexedDB. If that's still the case for you, you can store the audio files as ArrayBuffers instead, which are very well supported. Here is an example using ArrayBuffer:
main.js
"use strict";
(function() {
localforage.setItem("test", "working");
// convert arrayBuffer to Blob
function arrayBufferToBlob(buffer, type) {
return new Blob([buffer], { type: type });
}
// convert Blob to arrayBuffer
function blobToArrayBuffer(blob) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.addEventListener("loadend", e => {
resolve(reader.result);
});
reader.addEventListener("error", reject);
reader.readAsArrayBuffer(blob);
});
}
// create HTML5 audio player
function createAudioPlayer(audio) {
// if it's a buffer
if (audio.buffer) {
// convert it to blob
audio = arrayBufferToBlob(audio.buffer, audio.type);
}
const audioEl = document.createElement("audio");
const audioSrc = document.createElement("source");
const container = document.getElementById("container");
audioEl.controls = true;
audioSrc.type = audio.type;
audioSrc.src = URL.createObjectURL(audio);
container.append(audioEl);
audioEl.append(audioSrc);
}
window.addEventListener("load", e => {
console.log("page loaded");
// get the audio from indexedDB
localforage.getItem("audio").then(audio => {
// it may be null if it doesn't exist
if (audio) {
console.log("audio exist");
createAudioPlayer(audio);
} else {
console.log("audio doesn't exist");
// fetch local audio file from my disk
fetch("panumoon_-_sidebyside_2.mp3")
// convert it to blob
.then(res => res.blob())
.then(blob => {
const type = blob.type;
blobToArrayBuffer(blob).then(buffer => {
// save the buffer and type to indexedDB
// the type is needed to convet the buffer back to blob
localforage
.setItem("audio", { buffer, type })
// create HTML5 audio player
.then(audio => createAudioPlayer(audio));
});
});
}
});
});
})();
Moving my answer here from the comment.
You can use the HTML5 localStorage API to store/cache the audio content. See this article from Apple: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/Introduction/Introduction.html
As per the article,
Make your website more responsive by caching resources—including audio and video media—so they aren't reloaded from the web server each time a user visits your site.
The article includes an example showing how to use the storage.
Apple also allows you to use a database if you need so. See this example: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/ASimpleExample/ASimpleExample.html#//apple_ref/doc/uid/TP40007256-CH4-SW4
Let's explore some browser storage options:
localStorage is only good for storing short key/value strings.
IndexedDB's API is not very ergonomic.
WebSQL is deprecated/removed.
The Native File System API is a good candidate, but it's still experimental behind a flag in Chrome.
localForage is just a boilerplate library for key/value storage wrapped around IndexedDB and promises (good, but unnecessary here).
That leaves us with: Cache Storage.
/**
 * Returns the cached response if it exists, or fetches it,
 * stores it, and returns it as a blob
 *
 * @param {string|Request} url
 * @returns {Promise<Blob>}
 */
async function cacheFirst (url) {
  const cache = await caches.open('cache')
  const res = await cache.match(url) || await fetch(url).then(res => {
    cache.put(url, res.clone())
    return res
  })
  return res.blob()
}

cacheFirst(url).then(blob => {
  audioElm.src = URL.createObjectURL(blob)
})
Cache Storage goes hand in hand with service workers but can function without one. Note that your site needs to be served from a secure context (HTTPS), as this is a "powerful feature" that only exists in secure contexts.
A service worker is a great addition if you want to build a PWA (Progressive Web App) with offline support; maybe you should consider it. Something that can help you on the way is Workbox: it can cache stuff on the fly as you need it, like a man in the middle, and it also has a cache-first strategy.
Then it can be as simple as writing <audio src="url"> and letting Workbox do its thing, as in the sketch below.
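For illustration, a minimal Workbox service worker with a cache-first route for audio could look like this; the .mp3 URL match and the cache name are assumptions about your setup:
// sw.js - serve audio cache-first, hitting the network only on a cache miss
import { registerRoute } from 'workbox-routing'
import { CacheFirst } from 'workbox-strategies'

registerRoute(
  // match any request for an .mp3 file (adjust to your URL scheme)
  ({ url }) => url.pathname.endsWith('.mp3'),
  new CacheFirst({ cacheName: 'audio-cache' })
)
One thing to watch: seeking inside <audio> makes the browser issue range requests, and Workbox's workbox-range-requests plugin exists for serving those from a cached full response.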
I am using Web Share Level 2 for my PWA app. Every media format is working fine except PDF. The web API returns a base64 string of the PDF; on the client side, I create a blob object from it, but when I share it, it throws an exception: Permission Denied.
var file = new File(["/9j/4AAQSkZJRgABAQAAAQABAAD...."], 'filename.pdf', { type: 'application/pdf' });
var filesArray = [];
filesArray.push(file);
navigator['share']({ files: filesArray })
  .then(() => console.log('Share was successful.'))
  .catch((error) => console.log('Sharing failed', error));
I don't have any clue what's going on.
For others who might encounter this problem: this was discussed in https://github.com/w3c/web-share/issues/141 and is a current limitation in Chrome, tracked at https://crbug.com/1006055
I have been trying to build a media player in React Native using Expo to be able to play audio in my music project.
I have successfully hacked one together with the preferred design etc., but I still have a minor bug. I receive information from an API endpoint with links to files stored on a server. These audios play when the filename is just one word, but when there are spaces in the name, the file does not play; e.g. .../musics/test.mp3 plays while .../musics/test 32.mp3 does not. Any idea on how to handle this issue in React Native will be highly appreciated. My play function:
startPlay = async (index = this.index, playing = false) => {
  const url = this.list[index].url;
  this.index = index;
  console.log(url);
  // Checking if now playing music, if yes stop that
  if (playing) {
    await this.soundObject.stopAsync();
  } else {
    // Checking if item already loaded, if yes just play, else load music before play
    if (this.soundObject._loaded) {
      await this.soundObject.playAsync();
    } else {
      await this.soundObject.loadAsync(url);
      await this.soundObject.playAsync();
    }
  }
};
url is the link to the file.
I am working on a streaming platform, and I would love to build a player similar to this one: https://hackernoon.com/building-a-music-streaming-app-using-react-native-6d0878a13ba4
But I am using React Native with Expo. All the implementations I have come across online use bare React Native without Expo. Any pointers to existing work on this using Expo (e.g. packages) would be of great help.
Thanks.
The URLs should be encoded:
const uri = this.list[index].url;
this.index = index;
const url = encodeURI(uri);
console.log(url);
The uri = "../musics/test 32.mp3" will be encoded to url = "../musics/test%2032.mp3".
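One caveat, in case a filename can contain characters like "#" or "?": encodeURI leaves those alone, since they are significant in a full URL. If that can happen, encode just the filename segment instead:
// Encode only the last path segment so "#" or "?" in a filename survives
const parts = uri.split('/')
parts[parts.length - 1] = encodeURIComponent(parts[parts.length - 1])
const url = parts.join('/')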
I have a video element, with data being added via MSE. I'm trying to determine how many audio channels there are in each track.
The AudioTrack objects themselves don't have a property with this information. The only way I know to go about it is to use the Web Audio API:
const v = document.querySelector('video');
const ctx = new OfflineAudioContext(32, 48000, 48000);
console.log(Array.from(v.audioTracks).map((track) => {
  return ctx.createBufferSource(track.sourceBuffer).channelCount;
}));
For a video with a single mono track, I expect to get [1]. For a video with a single stereo track, I expect to get [2]. Yet, every time I get [2] no matter what the channel count is in the original source.
Questions:
Is there a proper direct way to get the number of channels in an AudioTrack?
Is there something else I could be doing with the Web Audio API to get the correct number of channels?
I stumbled upon an answer for this that seems to be working. It looks like by using decodeAudioData we can grab some buffer data about a file. I built a little function that returns a Promise with the buffer data, which should report the correct number of channels of an audio file:
// The original snippet references audioContext without defining it;
// this declaration is assumed:
const audioContext = new (window.AudioContext || window.webkitAudioContext)()

function loadBuffer(path) {
  return fetch(path)
    .then(response => response.arrayBuffer())
    .then(
      buffer =>
        new Promise((resolve, reject) =>
          audioContext.decodeAudioData(
            buffer,
            data => resolve(data),
            err => reject(err)
          )
        )
    )
}
Then you can use it like this:
loadBuffer(audioSource).then(data => console.log(data.numberOfChannels))
It might be best to store and reuse the data if it will be called multiple times.
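For example, a small cache around loadBuffer lets repeated calls for the same path reuse the in-flight or decoded result instead of fetching and decoding again (a sketch; the Map is keyed by path):
// Cache the loadBuffer promise per path so each file is fetched/decoded once
const bufferCache = new Map()

function loadBufferCached(path) {
  if (!bufferCache.has(path)) {
    bufferCache.set(path, loadBuffer(path))
  }
  return bufferCache.get(path)
}

loadBufferCached(audioSource).then(data => console.log(data.numberOfChannels))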