I have been trying to build a media player in React Native using Expo to play audio for my music project.
I have successfully hacked one together with the preferred design, but I still have a minor bug. I receive information from an API endpoint with links to files stored on a server. These audio files play when the filename is a single word, but when the name contains spaces, the file does not play. For example, .../musics/test.mp3 plays while .../musics/test 32.mp3 does not. Any idea on how to handle this issue in React Native would be highly appreciated. My play function:
startPlay = async (index = this.index, playing = false) => {
  const url = this.list[index].url;
  this.index = index;
  console.log(url);
  // If music is currently playing, stop it
  if (playing) {
    await this.soundObject.stopAsync();
  } else {
    // If the item is already loaded, just play; otherwise load it before playing
    if (this.soundObject._loaded) {
      await this.soundObject.playAsync();
    } else {
      await this.soundObject.loadAsync(url);
      await this.soundObject.playAsync();
    }
  }
};
url is the link to the file.
I am working on a streaming platform and would love to get a player similar to this one: https://hackernoon.com/building-a-music-streaming-app-using-react-native-6d0878a13ba4
But I am using React Native with Expo, and all the implementations I have come across online use bare React Native without Expo. Any pointers to existing work (e.g. packages) that does this with Expo would be a great help.
Thanks.
The URLs should be encoded:
const uri = this.list[index].url;
this.index = index;
const url = encodeURI(uri);
console.log(url);
The uri = "../musics/test 32.mp3" will be encoded to url = "../musics/test%2032.mp3".
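Note that encodeURI escapes only characters that are invalid in a URL (such as spaces) while leaving delimiters like `/` and `:` intact, which is what you want for a full URL; encodeURIComponent would escape the delimiters too and break the path. A quick sketch of the difference:

```javascript
const uri = '.../musics/test 32.mp3';

// encodeURI keeps URL delimiters, so it is safe for whole URLs
const full = encodeURI(uri); // '.../musics/test%2032.mp3'

// encodeURIComponent also escapes delimiters like '/', so it is only
// safe for individual path segments or query values, not whole URLs
const segment = encodeURIComponent('test 32.mp3'); // 'test%2032.mp3'
const escaped = encodeURIComponent('a/b');         // 'a%2Fb'
```

So for a complete file URL coming from an API, encodeURI is the right tool.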
Related
I am a trainee JavaScript developer and I have just recently started using TypeScript, so maybe this is a silly question, but I can't figure it out. I am working on a system for answering questions via video using WebRTC. I have two video tags: one for the video being recorded and the other for playing back the already recorded video. How can I change the size of the second one? I didn't find an options parameter, so I am clueless.
const playRecorded = () => {
  console.log(recordedBlobs.current);
  let superBuffer = new Blob(recordedBlobs.current[videoUp.id], {
    type: 'video/webm;codecs=vp9,opus',
  });
  if (recordedVideo.current !== null) {
    recordedVideo.current.src = window.URL.createObjectURL(superBuffer);
    recordedVideo.current.controls = true;
    (recordedVideo as any).current.play();
  } else {
    console.error('screwed it up'); // originally: 'la cagué'
  }
};
I've been building a music app, and today I finally got to the point where I started working on playing the music in it.
As an outline of how my environment is set up: I am storing the music files as MP3s, which I have uploaded into a MongoDB database using GridFS. I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end, where they are processed by the Web Audio API and scheduled to play.
When they play, they are all in the correct order, but there is a very tiny glitch or skip at the same spots every time (presumably between chunks) that I can't seem to get rid of. As far as I can tell, they are all scheduled right up next to each other, so I can't find a reason why there should be any gap or overlap between them. Any help would be appreciated. Here's the code:
Socket Route
socket.on('stream-audio', () => {
  db.client.db("dev").collection('music.files').findOne({ "metadata.songId": "3" }).then((result) => {
    const bucket = new GridFSBucket(db.client.db("dev"), {
      bucketName: "music"
    });
    bucket.openDownloadStream(result._id).on('data', (chunk) => {
      socket.emit('audio-chunk', chunk);
    });
  });
});
Front end
// These variables are declared as object properties, hence all of the "this" keywords
context: new (window.AudioContext || window.webkitAudioContext)(),
freeTime: null,
numChunks: 0,
chunkTracker: [],
...
this.socket.on('audio-chunk', (chunk) => {
  // Keep track of chunk decoding status so they don't get scheduled out of order
  const chunkId = this.numChunks;
  this.chunkTracker.push({
    id: chunkId,
    complete: false,
  });
  this.numChunks += 1;
  // Callback for decodeAudioData; an arrow function, so "this" is already lexical
  const decodeCallback = (buffer) => {
    let shouldExecute = false;
    const trackIndex = this.chunkTracker.map((e) => e.id).indexOf(chunkId);
    // Check whether this is the first chunk or the previous chunk has completed
    if (trackIndex !== 0) {
      const prevChunk = this.chunkTracker.filter((e) => e.id === (chunkId - 1));
      if (prevChunk[0].complete) {
        shouldExecute = true;
      }
    } else {
      shouldExecute = true;
    }
    // THIS IS THE ACTUAL WEB AUDIO API STUFF
    if (shouldExecute) {
      if (this.freeTime === null) {
        this.freeTime = this.context.currentTime;
      }
      const source = this.context.createBufferSource();
      source.buffer = buffer;
      source.connect(this.context.destination);
      if (this.context.currentTime >= this.freeTime) {
        source.start();
        this.freeTime = this.context.currentTime + buffer.duration;
      } else {
        source.start(this.freeTime);
        this.freeTime += buffer.duration;
      }
      // Mark this chunk as complete in the tracker
      this.chunkTracker[trackIndex] = { id: chunkId, complete: true };
    } else {
      // If the previous chunk hasn't been processed yet, check again in 50 ms
      setTimeout((passBuffer) => {
        decodeCallback(passBuffer);
      }, 50, buffer);
    }
  };
  this.context.decodeAudioData(chunk, decodeCallback);
});
Any help would be appreciated, thanks!
As an outline of how my environment is set up, I am storing the music files as MP3s which I have uploaded into a MongoDB database using GridFS.
You can do this if you want, but these days we have tools like Minio, which can make this easier using more common APIs.
I then use a socket.io server to download the chunks from the MongoDB database and send them as individual emits to the front end
Don't go this route. There's no reason for the overhead of web sockets, or Socket.IO. A normal HTTP request would be fine.
where they are processed by the Web Audio API and scheduled to play.
You can't stream this way. The Web Audio API doesn't support useful streaming unless you happen to have raw PCM chunks, which you don't.
As far as I can tell, they are all scheduled right up next to each other so I can't find a reason why there should be any sort of gap or overlap between them.
Lossy codecs aren't going to give you sample-accurate output. Especially with MP3: if you give the encoder some arbitrary number of samples, you're going to end up with at least one full MP3 frame (1152 samples, or 576 for MPEG-2) of output. The decoder also needs data ahead of the first audio frame to work properly. If you want to decode a stream, you need a stream to start with: you can't independently decode MP3 chunks this way.
Fortunately, the solution also simplifies what you're doing. Simply return an HTTP stream from your server and use an HTML audio element (<audio> or new Audio(url)). The browser will handle all the buffering. Just make sure your server handles range requests, and you're good to go.
I have a language site that I am working on to teach language. Users can click on objects and hear the audio for what they click on. Many of the people who will use it are in remote areas with slow Internet connections. Because of this, I need to cache audio before each activity is loaded; otherwise there is too much of a delay.
Previously, I had an issue where preloading would not work because iOS devices do not allow audio to load without a click event. I have gotten around that; however, I now have another issue: iOS/Safari only allows the most recently loaded audio file to stay loaded. Whenever the user clicks on another audio file (even one that was clicked on previously), it is not cached, and the browser has to download it again.
So far I have not found an adequate solution. There are many posts from around 2011~2012 that try to deal with this, but none with a good solution. One suggestion was to combine all the audio clips for an activity into a single audio file, so only one file is loaded into memory per activity, and you just play a particular part of it. While this may work, it becomes a nuisance whenever a clip needs to be changed, added, or removed.
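For reference, that single-file ("audio sprite") approach boils down to seeking and pausing; the clip names and offsets below are hypothetical:

```javascript
// Hypothetical sprite map: clip name -> [start, duration] in seconds,
// all clips concatenated into one combined audio file.
const SPRITE = {
  hello:   [0, 1.2],
  goodbye: [1.2, 0.9],
};

// Seek into the combined file, play, and stop after the clip's duration.
function playClip(audioEl, name) {
  const [start, duration] = SPRITE[name];
  audioEl.currentTime = start;
  audioEl.play();
  setTimeout(() => audioEl.pause(), duration * 1000);
}
```

The maintenance pain described above comes from having to rebuild the combined file and this offset table every time a clip changes.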
I need something that works well in a ReactJS/Redux environment and caches properly on iOS devices.
Is there a 2020 solution that works well?
You can use IndexedDB. It's a low-level API for client-side storage of significant amounts of structured data, including files/blobs. The IndexedDB API is powerful but may seem too complicated for simple cases. If you'd prefer a simpler API, try a library such as localForage or Dexie.js.
localForage is a library providing a simple name:value syntax for client-side data storage. It uses IndexedDB in the background but falls back to WebSQL and then localStorage in browsers that don't support IndexedDB.
You can check browser support for IndexedDB here: https://caniuse.com/#search=IndexedDB. It's well supported. Here is a simple example I made to show the concept:
index.html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Audio</title>
</head>
<body>
  <h1>Audio</h1>
  <div id="container"></div>
  <script src="localForage.js"></script>
  <script src="main.js"></script>
</body>
</html>
main.js
"use strict";
(function() {
  localforage.setItem("test", "working");

  // create HTML5 audio player
  function createAudioPlayer(audio) {
    const audioEl = document.createElement("audio");
    const audioSrc = document.createElement("source");
    const container = document.getElementById("container");
    audioEl.controls = true;
    audioSrc.type = audio.type;
    audioSrc.src = URL.createObjectURL(audio);
    container.append(audioEl);
    audioEl.append(audioSrc);
  }

  window.addEventListener("load", e => {
    console.log("page loaded");
    // get the audio from IndexedDB
    localforage.getItem("audio").then(audio => {
      // it may be null if it doesn't exist
      if (audio) {
        console.log("audio exists");
        createAudioPlayer(audio);
      } else {
        console.log("audio doesn't exist");
        // fetch a local audio file from disk
        fetch("panumoon_-_sidebyside_2.mp3")
          // convert it to a blob
          .then(res => res.blob())
          .then(audio => {
            // save the blob to IndexedDB
            localforage
              .setItem("audio", audio)
              // create HTML5 audio player
              .then(audio => createAudioPlayer(audio));
          });
      }
    });
  });
})();
localForage.js just includes the code from here: https://github.com/localForage/localForage/blob/master/dist/localforage.js
You can check IndexedDB in Chrome dev tools (the Application tab) and you will find the items there. If you refresh the page, they persist, and you will see the audio player created as well. I hope this answers your question.
BTW, older versions of iOS Safari didn't support storing blobs in IndexedDB. If that's still the case for you, you can store the audio files as ArrayBuffers instead, which is very well supported. Here is an example using ArrayBuffer:
main.js
"use strict";
(function() {
  localforage.setItem("test", "working");

  // convert ArrayBuffer to Blob
  function arrayBufferToBlob(buffer, type) {
    return new Blob([buffer], { type: type });
  }

  // convert Blob to ArrayBuffer
  function blobToArrayBuffer(blob) {
    return new Promise((resolve, reject) => {
      const reader = new FileReader();
      reader.addEventListener("loadend", e => {
        resolve(reader.result);
      });
      reader.addEventListener("error", reject);
      reader.readAsArrayBuffer(blob);
    });
  }

  // create HTML5 audio player
  function createAudioPlayer(audio) {
    // if it's a buffer
    if (audio.buffer) {
      // convert it to a blob
      audio = arrayBufferToBlob(audio.buffer, audio.type);
    }
    const audioEl = document.createElement("audio");
    const audioSrc = document.createElement("source");
    const container = document.getElementById("container");
    audioEl.controls = true;
    audioSrc.type = audio.type;
    audioSrc.src = URL.createObjectURL(audio);
    container.append(audioEl);
    audioEl.append(audioSrc);
  }

  window.addEventListener("load", e => {
    console.log("page loaded");
    // get the audio from IndexedDB
    localforage.getItem("audio").then(audio => {
      // it may be null if it doesn't exist
      if (audio) {
        console.log("audio exists");
        createAudioPlayer(audio);
      } else {
        console.log("audio doesn't exist");
        // fetch a local audio file from disk
        fetch("panumoon_-_sidebyside_2.mp3")
          // convert it to a blob
          .then(res => res.blob())
          .then(blob => {
            const type = blob.type;
            blobToArrayBuffer(blob).then(buffer => {
              // save the buffer and type to IndexedDB;
              // the type is needed to convert the buffer back to a blob
              localforage
                .setItem("audio", { buffer, type })
                // create HTML5 audio player
                .then(audio => createAudioPlayer(audio));
            });
          });
      }
    });
  });
})();
Moving my answer here from the comment.
You can use the HTML5 local storage APIs to store/cache the audio content. See this article from Apple: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/Introduction/Introduction.html.
As per the article,
Make your website more responsive by caching resources—including audio
and video media—so they aren't reloaded from the web server each time
a user visits your site.
The article includes an example showing how to use the storage.
Apple also allows you to use a database if you need one. See this example: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/ASimpleExample/ASimpleExample.html#//apple_ref/doc/uid/TP40007256-CH4-SW4
Let's explore some browser storage options:
localStorage is only good for storing short key/value strings.
IndexedDB is not ergonomic, by design.
WebSQL is deprecated/removed.
The Native File System API is a good candidate but is still experimental, behind a flag in Chrome.
localForage is just a boilerplate lib for key/value storage wrapped around IndexedDB and promises (good, but unnecessary).
That leaves us with: Cache Storage.
/**
 * Returns the cached response if it exists, or fetches it,
 * stores it, and returns a blob
 *
 * @param {string|Request} url
 * @returns {Promise<Blob>}
 */
async function cacheFirst (url) {
  const cache = await caches.open('cache')
  const res = await cache.match(url) || await fetch(url).then(res => {
    cache.put(url, res.clone())
    return res
  })
  return res.blob()
}

cacheFirst(url).then(blob => {
  audioElm.src = URL.createObjectURL(blob)
})
Cache Storage goes hand in hand with service workers but can function without one. Note that your site needs to be served over HTTPS, as Cache Storage is a "powerful feature" and only exists in secure contexts.
A service worker is a great addition if you want to build a PWA (Progressive Web App) with offline support; maybe you should consider it. Something that can help you along the way is Workbox: it can cache things on the fly as you need them, like a man in the middle, and it also has a cache-first strategy.
Then it can be as simple as writing <audio src="url"> and letting Workbox do its thing.
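As a rough sketch (assuming Workbox v6 loaded from the CDN; the cache name is arbitrary), a service worker that serves audio cache-first could look like:

```javascript
// sw.js
importScripts('https://storage.googleapis.com/workbox-cdn/releases/6.5.4/workbox-sw.js');

// Serve audio requests from the cache first, falling back to the
// network and caching the response for next time.
workbox.routing.registerRoute(
  ({ request }) => request.destination === 'audio',
  new workbox.strategies.CacheFirst({ cacheName: 'audio-cache' })
);
```

One caveat: media elements often issue range requests, and Workbox has a RangeRequestsPlugin for serving those from cached full responses, which is worth looking into for audio.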
I'm a web developer from Japan.
This is my first question on Stack Overflow.
I'm creating a simple music web application now.
I am a complete beginner at making a music program, so I am struggling to implement it.
After various investigations, I concluded that using the Web Audio API was the best choice, so I decided to use it.
▼ What I want to achieve
Load multiple WAV files with the Web Audio API, combine them into one WAV file, and make it downloadable from the browser.
For example, load multiple WAV files (guitar, drums, piano), edit them in the browser, and finally output them as one WAV file.
Then we can download that edited WAV file from the browser and play it in iTunes.
▼ Question
Is it possible to achieve these requirements using just the Web Audio API, or do we need another library?
I checked Record.js on GitHub, but development stopped about 2~3 years ago, it has many open issues, and I cannot get support, so I decided not to use it.
I also checked a similar question, Web audio API: scheduling sounds and exporting the mix, but since the information is old, I do not know if it still applies.
Thanks.
Hi and welcome to Stack Overflow!
Is it possible to achieve this just using the Web Audio API?
In terms of merging/mixing the files together, this is perfectly achievable! This article goes through many (if not all) of the steps needed to carry out the task you described.
Each file you want to upload can be loaded into an AudioBufferSourceNode (examples are explained in the article linked above). Here is an example of setting up a buffer source once the audio data has been loaded:
play: function (data, callback) {
  // create audio node and play buffer
  var me = this,
      source = this.context.createBufferSource(),
      gainNode = this.context.createGain();
  if (!source.start) { source.start = source.noteOn; }
  if (!source.stop) { source.stop = source.noteOff; }
  source.connect(gainNode);
  gainNode.connect(this.context.destination);
  source.buffer = data;
  source.loop = true;
  source.startTime = this.context.currentTime; // important for later!
  source.start(0);
  return source;
}
There are also nodes designed specifically for mixing purposes, like the ChannelMergerNode (which combines multiple mono channels into a new multi-channel buffer). Use these if you don't want to do the signal processing yourself in JavaScript; it will be faster to use the Web Audio nodes, since they are natively compiled code already in the browser.
Following the complete guide linked before, there are also options to export the file (as a .wav in the demo's case) using the following code:
var rate = 22050;

function exportWAV(type, before, after) {
  if (!before) { before = 0; }
  if (!after) { after = 0; }
  var channel = 0,
      buffers = [];
  for (channel = 0; channel < numChannels; channel++) {
    buffers.push(mergeBuffers(recBuffers[channel], recLength));
  }
  var i = 0,
      offset = 0,
      newbuffers = [];
  for (channel = 0; channel < numChannels; channel += 1) {
    offset = 0;
    newbuffers[channel] = new Float32Array(before + recLength + after);
    if (before > 0) {
      for (i = 0; i < before; i += 1) {
        newbuffers[channel].set([0], offset);
        offset += 1;
      }
    }
    newbuffers[channel].set(buffers[channel], offset);
    offset += buffers[channel].length;
    if (after > 0) {
      for (i = 0; i < after; i += 1) {
        newbuffers[channel].set([0], offset);
        offset += 1;
      }
    }
  }
  if (numChannels === 2) {
    var interleaved = interleave(newbuffers[0], newbuffers[1]);
  } else {
    var interleaved = newbuffers[0];
  }
  var downsampledBuffer = downsampleBuffer(interleaved, rate);
  var dataview = encodeWAV(downsampledBuffer, rate);
  var audioBlob = new Blob([dataview], { type: type });
  this.postMessage(audioBlob);
}
So I think Web Audio has everything you could want for this purpose! It could be challenging depending on your web development experience, but it's a skill definitely worth learning!
Do we need to use another library?
If you can, I think it's definitely worth trying it with Web Audio, as you'll almost certainly get the best processing speeds, but there are other libraries, such as Pizzicato.js, to name just one. I'm sure you will find plenty of others.
I've been working on using the HTML audio tag to play some audio files. The audio plays alright, but the duration property of the audio tag always returns Infinity.
I tried the accepted answer to this question but with the same result. Tested with Chrome, IE and Firefox.
Is this a bug with the audio tag, or am I missing something?
Some of the code I'm using to play the audio files.
JavaScript function called when the play button is pressed:
function playPlayerV2(src) {
  document.getElementById("audioplayerV2").addEventListener("loadedmetadata", function (_event) {
    console.log(player.duration);
  });
  var player = document.getElementById("audioplayerV2");
  player.src = src;
  player.load();
  player.play();
}
the audio tag in html
<audio controls="true" id="audioplayerV2" style="display: none;" preload="auto">
Note: I'm hiding the standard audio player with the intent of using a custom layout and controlling the player via JavaScript; this does not seem to be related to my problem.
Try this:
var getDuration = function (url, next) {
  var _player = new Audio(url);
  _player.addEventListener("durationchange", function (e) {
    if (this.duration != Infinity) {
      var duration = this.duration;
      _player.remove();
      next(duration);
    }
  }, false);
  _player.load();
  _player.currentTime = 24 * 60 * 60; // fake big time
  _player.volume = 0;
  _player.play();
  // waiting...
};

getDuration('/path/to/audio/file', function (duration) {
  console.log(duration);
});
I think this is due to a Chrome bug. Until it's fixed:
if (video.duration === Infinity) {
  video.currentTime = 10000000;
  setTimeout(() => {
    video.currentTime = 0; // reset the time so it starts at the beginning
  }, 1000);
}
let duration = video.duration;
This works for me:
const audio = document.getElementById("audioplayer");

audio.addEventListener('loadedmetadata', () => {
  if (audio.duration === Infinity) {
    audio.currentTime = 1e101;
    audio.addEventListener('timeupdate', getDuration);
  }
});

function getDuration() {
  audio.currentTime = 0;
  audio.removeEventListener('timeupdate', getDuration);
  console.log(audio.duration);
}
In case you control the server and can make it send proper media headers - this is what helped the OP.
I faced this problem with files stored in Google Drive when fetching them in the mobile version of Chrome. I cannot control Google Drive's response, and I have to deal with it somehow.
I don't have a solution that satisfies me yet, but I tried the idea from both posted answers, which is basically the same: make the audio/video object seek to the real end of the resource. After Chrome finds the real end position, it gives you the duration. However, the result is unsatisfying.
What this hack really does is force Chrome to load the resource into memory completely. So if the resource is too big, or the connection is too slow, you end up waiting a long time for the file to be downloaded behind the scenes. And you have no control over that file: it is handled by Chrome, and once Chrome decides it is no longer needed, it will dispose of it, so the bandwidth may be spent inefficiently.
So, if you can load the file yourself, it is better to download it (e.g. as a blob) and feed it to your audio/video control.
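A minimal sketch of that approach (the URL below is a placeholder; in the browser you would assign the result to your element's src):

```javascript
// Download the media once as a blob, then hand the player an object URL,
// so the browser has the complete file and can report a real duration.
async function loadAsBlobURL(url) {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const blob = await res.blob();
  return URL.createObjectURL(blob);
}

// usage (browser):
// loadAsBlobURL('/media/song.mp3').then(src => { audioEl.src = src; });
```

The trade-off is the same memory cost as the seek hack, but you control the lifetime of the blob and can revoke the object URL when done.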
If this is a Twilio MP3, try the .wav version. The MP3 comes across as a stream, and that fools audio players.
To use the .wav version, just change the format in the source URL from .mp3 to .wav (or leave it off; wav is the default).
Note: the wav file is about 4x larger, so that's the downside of switching.
Not a direct answer, but in case anyone using blobs came here: I managed to fix it using a package called webm-duration-fix.
import fixWebmDuration from "webm-duration-fix";
...
fixedBlob = await fixWebmDuration(blob);
...
If you want to fix the video file itself, you can use the package webmFixDuration; other methods are applied only at the display level on the video tag, whereas this method modifies the complete video file.
webmFixDuration GitHub example:
mediaRecorder.onstop = async () => {
  const duration = Date.now() - startTime;
  const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
  const fixedBlob = await webmFixDuration(buggyBlob, duration);
  displayResult(fixedBlob);
};