Disclaimer: I am familiar with web technologies but still a newbie.
Scenario: I want the user to choose an audio file from the local file system. Then I wish to show a small audio control on the webpage to play the selection and send the audio back to the server (after clicking a button).
Problem: Using MatBlazor FileUpload, I am able to get a stream to the local audio file, but I am at a loss on how to use it with the HTML audio element. Specifically, how can I pass the audio stream to the src attribute of the element?
One clear way to do this in JavaScript is to use an <input type="file"/> element and then use the FileReader API to play the audio, as shown here: Using FileReader.readAsDataURL(), Using URL.createObjectURL().
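For reference, that plain-JavaScript approach is roughly this (a minimal sketch using URL.createObjectURL; the input selector is whatever the page actually uses):

// Sketch: play a locally selected file through a temporary blob: URL
document.querySelector('input[type="file"]').addEventListener('change', (e) => {
    const file = e.target.files[0];
    if (file) {
        const audio = new Audio(URL.createObjectURL(file));
        audio.play();
    }
});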
How can I do this in Blazor, i.e., play a local audio file in the browser using a stream, the right way?
Current workaround: For now, I am reading the stream, converting the audio to a base64 string, and then passing it on to the audio element.
The downside of this approach is that for a large audio file of about 18 MB the conversion takes ~30 seconds, and the UI is blocked until it finishes. If I use the JavaScript way with interop, the load time is almost instantaneous, but then I have to use the input element rather than the MatFileUpload component. Another reason to have the local file stream is that I want to send this audio file to the server for further processing, and I have found that this is easily done using streams.
The code I am using to convert the audio stream to a base64 string:
// Fields on the component, bound in the markup to update the DOM
string base64Audio;
bool loadingAudio;
IMatFileUploadEntry file;

async Task FilesReadyForContent(IMatFileUploadEntry[] files)
{
    try
    {
        file = files.FirstOrDefault();
        if (file == null)
        {
            base64Audio = "Error! Could not load file";
        }
        else
        {
            using (MemoryStream ms = new MemoryStream())
            {
                loadingAudio = true;
                await InvokeAsync(() => this.StateHasChanged());
                await file.WriteToStreamAsync(ms);
                base64Audio = System.Convert.ToBase64String(ms.ToArray());
                loadingAudio = false;
            }
        }
    }
    catch (Exception ex)
    {
        base64Audio = $"Error! Exception:\r\n{ex.Message}\r\n{ex.StackTrace}";
        loadingAudio = false; // clear the loading flag on failure too
    }
    finally
    {
        await InvokeAsync(() => this.StateHasChanged());
    }
}
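For comparison, the near-instant JavaScript interop route boils down to a helper like this (a minimal sketch; the function name and element id are my own placeholders, and it would be invoked from Blazor via IJSRuntime; depending on the Blazor version, the byte array may arrive base64-encoded and need decoding first):

// Hypothetical interop helper: wrap raw bytes from .NET in a Blob and
// point an existing <audio> element at a temporary blob: URL.
window.playAudioFromBytes = function (bytes, mimeType) {
    const blob = new Blob([bytes], { type: mimeType }); // bytes: Uint8Array
    const player = document.getElementById("audioPlayer"); // assumed element id
    player.src = URL.createObjectURL(blob);
};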
Related
I am trying to record and upload audio from JavaScript. I can successfully record audio Blobs from a MediaRecorder. My understanding is that after recording several chunks into blobs, I would concatenate them as a new Blob(audioBlobs) and upload that. Unfortunately, the result on the server side keeps being more or less gibberish. I'm currently running a localhost connection, so converting to uncompressed WAV isn't a problem (it might become one later, but that's a separate issue). Here is what I have so far:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(stream => {
        const mediaRecorder = new MediaRecorder(stream);
        mediaRecorder.start(1000);

        const audioChunks = [];
        mediaRecorder.addEventListener("dataavailable", event => {
            audioChunks.push(event.data);
        });

        function sendData () {
            const audioBlob = new Blob(audioChunks);
            session.call('my.app.method', [XXXXXX see below XXXXXX])
        }
    })
The session object here is an autobahn.js WebSocket connection to a Python server (using soundfile). I tried a number of arguments in the place labelled XXXXXX in the code:
Just pass the audioBlob. In that case, the Python side just receives an empty dictionary.
Pass audioBlob.text(). In that case, I get something that looks somewhat binary (it starts with OggS), but it can't be decoded.
Pass audioBlob.arrayBuffer(). In that case, the Python side receives an empty dictionary.
A possible solution could be to convert the data to WAV on the server side (just changing the MIME type on the blob doesn't work) or to find a way to interpret the .text() output on the server side.
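One detail worth noting: Blob.text() and Blob.arrayBuffer() both return Promises, so passing their return value straight into session.call serializes a pending Promise, which might explain the empty dictionaries. Awaiting the buffer first would look something like this (untested sketch):

// Blob.arrayBuffer() returns a Promise, so it has to be awaited
// before handing the bytes to the RPC call (untested sketch):
async function sendData() {
    const audioBlob = new Blob(audioChunks);
    const buffer = await audioBlob.arrayBuffer(); // resolve the Promise first
    session.call('my.app.method', [new Uint8Array(buffer)]);
}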
The solution was to use recorder.js and its getBuffer method to get the wave data as a Float32Array.
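For anyone taking the same route, the shape of the recorder.js call is roughly this (a sketch based on recorder.js's documented API; the AudioContext wiring and the transport encoding are assumptions):

// Sketch: capture with recorder.js and pull the raw wave data
const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(stream); // stream from getUserMedia
const recorder = new Recorder(source);

recorder.record();
// ... later, when recording should end ...
recorder.stop();
recorder.getBuffer((buffers) => {
    // buffers is an array of Float32Arrays, one per channel
    session.call('my.app.method', [Array.from(buffers[0])]);
});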
I have a language-learning site that I am working on. Users can click on objects and hear the audio for what they click on. Many of the people who will be using this are in more remote areas with slower Internet connections. Because of this, I need to cache audio before each of the activities is loaded; otherwise there is too much of a delay.
Previously, I was having an issue where preloading would not work because iOS devices do not allow audio to load without a click event. I have gotten around this; however, I now have another issue. iOS/Safari only allows the most recent audio file to be loaded. Therefore, whenever the user clicks on another audio file (even one that was clicked on previously), it is not cached and the browser has to download it again.
So far I have not found an adequate solution to this. There are many posts from around 2011–2012 that try to deal with this, but I have not found a good solution. One suggestion was to combine all the audio clips for an activity into a single audio file; that way only one audio file would be loaded into memory for each activity, and you would just pick a particular part of the audio file to play. While this may work, it also becomes a nuisance whenever an audio clip needs to be changed, added, or removed.
I need something that works well in a ReactJS/Redux environment and caches properly on iOS devices.
Is there a 2020 solution that works well?
You can use IndexedDB. It's a low-level API for client-side storage of significant amounts of structured data, including files/blobs. The IndexedDB API is powerful but may seem too complicated for simple cases. If you'd prefer a simple API, try a library such as localForage or dexie.js.
localForage is a library providing a simple name:value syntax for client-side data storage; it uses IndexedDB in the background but falls back to WebSQL and then localStorage in browsers that don't support IndexedDB.
You can check the browser support for IndexedDB here: https://caniuse.com/#search=IndexedDB. It's well supported. Here is a simple example I made to show the concept:
index.html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Audio</title>
</head>
<body>
    <h1>Audio</h1>
    <div id="container"></div>
    <script src="localForage.js"></script>
    <script src="main.js"></script>
</body>
</html>
main.js
"use strict";
(function() {
localforage.setItem("test", "working");
// create HTML5 audio player
function createAudioPlayer(audio) {
const audioEl = document.createElement("audio");
const audioSrc = document.createElement("source");
const container = document.getElementById("container");
audioEl.controls = true;
audioSrc.type = audio.type;
audioSrc.src = URL.createObjectURL(audio);
container.append(audioEl);
audioEl.append(audioSrc);
}
window.addEventListener("load", e => {
console.log("page loaded");
// get the audio from indexedDB
localforage.getItem("audio").then(audio => {
// it may be null if it doesn't exist
if (audio) {
console.log("audio exist");
createAudioPlayer(audio);
} else {
console.log("audio doesn't exist");
// fetch local audio file from my disk
fetch("panumoon_-_sidebyside_2.mp3")
// convert it to blob
.then(res => res.blob())
.then(audio => {
// save the blob to indexedDB
localforage
.setItem("audio", audio)
// create HTML5 audio player
.then(audio => createAudioPlayer(audio));
});
}
});
});
})();
localForage.js just includes the code from here: https://github.com/localForage/localForage/blob/master/dist/localforage.js
You can check IndexedDB in the Chrome dev tools and you will find our items there. If you refresh the page, you will still see the item, and you will see the audio player created as well. I hope this answered your question.
BTW, older versions of Safari on iOS didn't support storing Blobs in IndexedDB. If that's still the case for you, you can store the audio files as ArrayBuffers instead, which are very well supported. Here is an example using ArrayBuffer:
main.js
"use strict";
(function() {
localforage.setItem("test", "working");
// convert arrayBuffer to Blob
function arrayBufferToBlob(buffer, type) {
return new Blob([buffer], { type: type });
}
// convert Blob to arrayBuffer
function blobToArrayBuffer(blob) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.addEventListener("loadend", e => {
resolve(reader.result);
});
reader.addEventListener("error", reject);
reader.readAsArrayBuffer(blob);
});
}
// create HTML5 audio player
function createAudioPlayer(audio) {
// if it's a buffer
if (audio.buffer) {
// convert it to blob
audio = arrayBufferToBlob(audio.buffer, audio.type);
}
const audioEl = document.createElement("audio");
const audioSrc = document.createElement("source");
const container = document.getElementById("container");
audioEl.controls = true;
audioSrc.type = audio.type;
audioSrc.src = URL.createObjectURL(audio);
container.append(audioEl);
audioEl.append(audioSrc);
}
window.addEventListener("load", e => {
console.log("page loaded");
// get the audio from indexedDB
localforage.getItem("audio").then(audio => {
// it may be null if it doesn't exist
if (audio) {
console.log("audio exist");
createAudioPlayer(audio);
} else {
console.log("audio doesn't exist");
// fetch local audio file from my disk
fetch("panumoon_-_sidebyside_2.mp3")
// convert it to blob
.then(res => res.blob())
.then(blob => {
const type = blob.type;
blobToArrayBuffer(blob).then(buffer => {
// save the buffer and type to indexedDB
// the type is needed to convet the buffer back to blob
localforage
.setItem("audio", { buffer, type })
// create HTML5 audio player
.then(audio => createAudioPlayer(audio));
});
});
}
});
});
})();
Moving my answer here from the comment.
You can use the HTML5 localStorage API to store/cache the audio content. See this article from Apple: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/Introduction/Introduction.html.
As per the article:
Make your website more responsive by caching resources—including audio and video media—so they aren't reloaded from the web server each time a user visits your site.
The article includes an example showing how to use the storage.
Apple also allows you to use a database if you need so. See this example: https://developer.apple.com/library/archive/documentation/iPhone/Conceptual/SafariJSDatabaseGuide/ASimpleExample/ASimpleExample.html#//apple_ref/doc/uid/TP40007256-CH4-SW4
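As a rough illustration of the localStorage route (a sketch only; localStorage stores strings, so the audio has to be base64-encoded as a data URL first, and the typical ~5 MB quota makes this suitable for short clips only):

// Sketch: cache a small audio clip in localStorage as a base64 data URL
function cacheAudio(url, key) {
    fetch(url)
        .then(res => res.blob())
        .then(blob => new Promise((resolve, reject) => {
            const reader = new FileReader();
            reader.onloadend = () => resolve(reader.result); // a data: URL string
            reader.onerror = reject;
            reader.readAsDataURL(blob);
        }))
        .then(dataUrl => localStorage.setItem(key, dataUrl));
}

// Later: play from the cache if present
const cached = localStorage.getItem("clip1");
if (cached) {
    new Audio(cached).play();
}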
Let's explore some browser storage options:
localStorage is only good for storing short key/value strings.
IndexedDB is not ergonomic in its design.
WebSQL is deprecated/removed.
The Native File System API is a good candidate, but it is still experimental behind a flag in Chrome.
localForage is just a boilerplate lib for key/value storage wrapped around IndexedDB and promises (good, but unnecessary).
That leaves us with: Cache Storage.
/**
 * Returns the cached response if it exists, or fetches it,
 * stores it, and returns a blob.
 *
 * @param {string|Request} url
 * @returns {Promise<Blob>}
 */
async function cacheFirst (url) {
  const cache = await caches.open('cache')
  const res = await cache.match(url) || await fetch(url).then(res => {
    cache.put(url, res.clone())
    return res
  })
  return res.blob()
}

cacheFirst(url).then(blob => {
  audioElm.src = URL.createObjectURL(blob)
})
Cache storage goes hand in hand with service workers but can function without one. Your site does need to be secure though, as it's a "powerful feature" that only exists in secure contexts.
A service worker is a great addition if you want to build a PWA (Progressive Web App) with offline support, so maybe you should consider it. Something that can help you on the way is Workbox: it can cache stuff on the fly as you need it, like a man in the middle, and it also has a cache-first strategy.
Then it can be as simple as just writing <audio src="url"> and letting Workbox do its thing.
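A minimal service-worker sketch of that idea (assuming Workbox v5+ module imports; adjust to however you load Workbox):

// service-worker.js: cache-first for audio requests
import { registerRoute } from 'workbox-routing';
import { CacheFirst } from 'workbox-strategies';

registerRoute(
    ({ request }) => request.destination === 'audio', // match audio fetches
    new CacheFirst({ cacheName: 'audio-cache' })      // serve from cache, fall back to network
);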
I encountered the following problem:
From the client side, using a WebSocket, I send the video piece by piece to the server:
mediaRecorder.ondataavailable = function (e) {
    if (ws && ws.readyState === WebSocket.OPEN && e.data && e.data.size > 0) {
        ws.send(e.data);
    }
    else {
        //...
    }
}
mediaRecorder.start(100);
On the server, in a loop, I receive these parts and write them to a file:
while (true)
{
    buffer = new ArraySegment<byte>(new byte[500000]);
    result = await socket.ReceiveAsync(buffer, CancellationToken.None);
    if (socket.State == WebSocketState.CloseReceived)
    {
        bw.Close();
        fs.Close();
        break;
    }
    if (Encoding.UTF8.GetString(buffer.Array, 0, result.Count) == "start record")
    {
        writeStatus = "start record";
        fullPath = $"{context.Server.MapPath("~")}{fileName}.{fileExt}";
        fs = new FileStream(fullPath, FileMode.Append, FileAccess.Write);
        bw = new BinaryWriter(fs);
        continue;
    }
    if (Encoding.UTF8.GetString(buffer.Array, 0, result.Count) == "stop record")
    {
        bw.Close();
        fs.Close();
        writeStatus = "none";
        continue;
    }
    if (writeStatus == "start record")
    {
        bw.Write(buffer.Array, 0, result.Count);
        pos += result.Count;
    }
}
The problem is that I need to keep writing video to the same file every time the page is reloaded. As I understand it, simply appending to the end of the file is not an option, since in that case the appended bytes are not played back when the recorded video file is played. Please tell me, how can I continue recording video into the same file?
It is probably not possible without going through a "video joining" process. I assume that the start of the stream from the browser includes a header that needs to be at the start of the file.
There are no simple algorithms or built-in .NET Framework tools that can help. It's best to use an external library or tool such as FFmpeg; see Joining Multiple .wmv Files with C#.
Video files are "container" formats that combine video and audio streams. You will find that multiple recording sessions are simply multiple "video file containers", each containing video and audio. If you keep track of where each "video file container" starts, you can feed that data to FFmpeg as separate files to be joined. You will not need to recompress the video or audio streams if they came from the same browser and have the same resolution; you should be able to simply "copy" those streams.
I'm creating a chat application where audio messages can be recorded.
I am creating blobs using react-mic. This is where I run into problems.
Can I just stringify the blob, save it to my DB, then pull it, reverse it, and play it with Wavesurfer?
Also, I don't think I'm really thinking about this the right way, because the blob: URL is always a localhost address?
Use the "loadBlob" method to play the audio directly.
If you want to store the audio file for later use, simply add a reference URL in your DB that points to the audio file within a CDN.
Here's how my AudioPlayer component loads the audioData:
wavesurfer.current = WaveSurfer.create(options);

if (!audioData) return;

if (typeof audioData === "string") {
    wavesurfer.current.load(audioData); // must be a URL
} else {
    wavesurfer.current.loadBlob(audioData); // must be a Blob
}
I am trying to learn how to use the input upload tag to take audio files and play them.
In my HTML I have an input:
<h3>Upload Song: <input id="song" type="file" accept="audio/*" oninput="updateSong()"></input></h3>
The idea being that once the song is uploaded from the user's computer, the updateSong() function is called and the system automatically saves the song as a var in JavaScript.
This would be done through the updateSong() function:
function updateSong() {
    song = document.getElementById("song");
    console.log(song);
    song.value.play();
}
var song;
Then, once the song is saved, I would like the song to play, just as a test so I know it works.
However, when I use this code to execute my idea, I get the error:
TypeError: song.value.play is not a function
at updateSong (/script.js:32:14)
at HTMLInputElement.oninput (/:17:91)
What am I missing that is causing the code not to run? I established the song variable and then update it with the song. This seems straightforward, so I'm not sure why it doesn't work.
The file input has a files property, which allows you to enumerate its list of files.
From there, you can use URL.createObjectURL to create a temporary Blob-style object URL which references the file from the user's computer.
With that URL, you can instantiate a new Audio element and start playback. For example:
document.querySelector('input[type="file"]').addEventListener('input', (e) => {
    console.log(e.target.files);
    if (e.target.files.length) {
        const audio = new Audio(
            URL.createObjectURL(e.target.files[0])
        );
        audio.play();
    }
});
(JSFiddle: https://jsfiddle.net/6tkxw0aj/)
Don't forget to revoke your object URL later when you're done with it!
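Something like this works for the cleanup (a small sketch):

// Release the blob: URL once playback has finished
audio.addEventListener('ended', () => {
    URL.revokeObjectURL(audio.src);
});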