How to save blob and play it with wavesurfer - javascript

I'm creating a chat application where audio messages can be recorded.
I am creating blobs using react-mic. This is where I run into problems.
Can I just stringify the blob, save it to my DB, and then pull it back out, reverse the process, and play it with Wavesurfer?
Also, I don't think I'm really thinking about this the right way, because the blob: URL is always a localhost address?

Use the "loadBlob" method to play the audio directly.
If you want to store the audio file for later use, simply add a reference URL in your db that points to the audio file within CDN.
Heres how my AudioPlayer component defines how to load the audioData
wavesurfer.current = WaveSurfer.create(options);
if (!audioData) return;
if (typeof audioData === "string") {
  wavesurfer.current.load(audioData); // must be a URL
} else {
  wavesurfer.current.loadBlob(audioData); // must be a Blob
}
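For the "store it for later" part, a minimal sketch of uploading the recorded blob and saving the returned URL might look like the following (the /api/audio-messages endpoint and its response shape are assumptions for illustration, not part of react-mic or wavesurfer):

// Assumed endpoint: POST /api/audio-messages accepts multipart form data
// and responds with { url: "https://..." } pointing at the stored file
async function saveRecording(blob) {
  const form = new FormData();
  form.append("audio", blob, "message.webm");
  const res = await fetch("/api/audio-messages", { method: "POST", body: form });
  const { url } = await res.json();
  return url; // store this URL in your DB
}

Whatever URL your upload endpoint returns is what you save in the DB and later pass to wavesurfer.current.load(url).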

Related

Web Audio API - How do I save the audio buffer to a file including all changes?

I made changes to an audio buffer, like gain and panning, and connected them to an audio context.
Now I want to save the audio to a file with all the changes applied.
Saving the buffer as-is would give me the original audio without the changes.
Is there a method or procedure to do that?
One way is to use a MediaRecorder to save the modified audio.
So, in addition to connecting to the destination, connect to a MediaStreamAudioDestinationNode (created with createMediaStreamDestination()). This node has a stream object that you can use to initialize a MediaRecorder. Set up the recorder to save the data when data is available. When you're done recording, you have a blob that you can then download.
Many details are missing here, but you can find out how to use a MediaRecorder using the MDN example.
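As a rough sketch of that approach (the source and gain nodes here are just placeholders for whatever graph you already have):

const ctx = new AudioContext();
const source = ctx.createBufferSource(); // assume source.buffer is set elsewhere
const gainNode = ctx.createGain();
gainNode.gain.value = 0.5;               // example of a change to capture
source.connect(gainNode);

const dest = ctx.createMediaStreamDestination();
gainNode.connect(ctx.destination); // still plays out loud
gainNode.connect(dest);            // also feeds the recorder

const recorder = new MediaRecorder(dest.stream);
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: recorder.mimeType });
  // blob now contains the modified audio; download or upload it from here
};
recorder.start();
source.start();
// ...when playback is finished: recorder.stop();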
I found a solution with OfflineAudioContext.
Here is an example that applies a gain change to my audio and renders it.
After rendering, I get an AudioBuffer with the changes I made.
From there, I can go on to save the file.
// Create an offline context matching the original buffer's channels, length, and sample rate
let offlineCtx = new OfflineAudioContext(
  this.buffer.numberOfChannels,
  this.buffer.length,
  this.buffer.sampleRate
);
// Re-create the source and the gain change inside the offline context
let obs = offlineCtx.createBufferSource();
obs.buffer = this.buffer;
let gain = offlineCtx.createGain();
gain.gain.value = this.gain.gain.value;
obs.connect(gain).connect(offlineCtx.destination);
obs.start();
// Render offline; the result is an AudioBuffer with the changes applied
let renderedBuffer = await offlineCtx.startRendering();
// Optionally play the rendered result back in the live context
let obsRES = this.ctx.createBufferSource();
obsRES.buffer = renderedBuffer;
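To go from the rendered AudioBuffer to an actual file, you still need to encode it; one common dependency-free option is to pack the samples into a WAV blob. The encoder below is my own illustration (16-bit PCM, interleaved channels), not something provided by the Web Audio API:

function audioBufferToWavBlob(buffer) {
  const numChannels = buffer.numberOfChannels;
  const sampleRate = buffer.sampleRate;
  const numFrames = buffer.length;
  const bytesPerSample = 2;
  const dataSize = numFrames * numChannels * bytesPerSample;
  const arrayBuffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(arrayBuffer);
  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  // RIFF/WAVE header
  writeString(0, "RIFF");
  view.setUint32(4, 36 + dataSize, true);
  writeString(8, "WAVE");
  writeString(12, "fmt ");
  view.setUint32(16, 16, true);  // fmt chunk size
  view.setUint16(20, 1, true);   // PCM format
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
  view.setUint16(32, numChannels * bytesPerSample, true);              // block align
  view.setUint16(34, 16, true);  // bits per sample
  writeString(36, "data");
  view.setUint32(40, dataSize, true);
  // Interleave channels and convert float [-1, 1] samples to 16-bit PCM
  let offset = 44;
  for (let i = 0; i < numFrames; i++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const sample = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
      view.setInt16(offset, sample < 0 ? sample * 0x8000 : sample * 0x7fff, true);
      offset += bytesPerSample;
    }
  }
  return new Blob([view], { type: "audio/wav" });
}

The resulting blob can then be downloaded through a temporary object URL (URL.createObjectURL) and an <a download> link, or uploaded to a server.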

Playing audio broken into multiple files in webpage

I want to play an audiobook in my web page. The audiobook is a .zip file, which contains multiple .mp3 files, one for each chapter of the book. The run time of all the files is several hours, and their cumulative size is 60 MB. The .zip is stored server-side (Express.js).
How can I play each file in succession in the client (in an <audio> element, for instance), so that the audiobook plays smoothly, as if it were one file?
Do I need to use a MediaStream object? If so, how?
-Thanks
I'd take a look at this answer on another Stack Overflow question; however, I have made some modifications to match your question:
var audioFileURLs = [];

function preloadAudio(url) {
  var audio = new Audio();
  // once this file loads, it will call loadedAudio()
  // the file will be kept by the browser as cache
  audio.addEventListener('canplaythrough', loadedAudio, false);
  audio.src = url;
}

var loaded = 0;
function loadedAudio() {
  // this will be called every time an audio file is loaded
  // we keep track of the loaded files vs the requested files
  loaded++;
  if (loaded == audioFileURLs.length) {
    // all have loaded
    init();
  }
}

var player = document.getElementById('player');
function play(index) {
  player.src = audioFileURLs[index];
  player.play();
}

function init() {
  // audio has been loaded; play all files one after the other
  var i = 0;
  // once the player ends, play the next one
  player.onended = function() {
    i++;
    if (i >= audioFileURLs.length) {
      // end of the book
      return;
    }
    play(i);
  };
  // play the first file
  play(i);
}

// call node/express server to get a list of links we can hit to retrieve each audio file
fetch('/getAudioUrls/#BookNameOrIdHere#')
  .then(r => r.json())
  .then(arrayOfURLs => {
    audioFileURLs = arrayOfURLs;
    arrayOfURLs.forEach(url => preloadAudio(url));
  });
And then just have an audio element on the screen with the id of "player" like <audio id="player"></audio>
With this answer, though, the arrayOfURLs array must contain URLs to an API on your server that will open the zip file and return the specified mp3 data. You may also just want to take this answer as a general reference and not a complete solution, because there is optimization to be done. You should probably only load the first audio file at first, and five minutes or so before the first file ends you may want to start pre-loading the next, then repeat this process for the entire book. That will all be up to you, but this should hopefully put you on your feet.
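For reference, a rough sketch of what that server endpoint could look like, assuming the zip sits on disk and using the adm-zip package to read it (the route paths, file location, and package choice are all assumptions, not part of the question):

// server.js (Express) - hypothetical routes for listing and serving chapters
const express = require('express');
const AdmZip = require('adm-zip'); // assumed dependency for reading the zip
const app = express();

const BOOK_ZIP = '/data/audiobooks/my-book.zip'; // assumed location

// Returns an array of URLs, one per .mp3 entry in the zip
app.get('/getAudioUrls/:bookId', (req, res) => {
  const zip = new AdmZip(BOOK_ZIP);
  const urls = zip.getEntries()
    .filter(e => e.entryName.endsWith('.mp3'))
    .map(e => `/audio/${req.params.bookId}/${encodeURIComponent(e.entryName)}`);
  res.json(urls);
});

// Serves a single chapter out of the zip
app.get('/audio/:bookId/:entryName', (req, res) => {
  const zip = new AdmZip(BOOK_ZIP);
  const entry = zip.getEntry(req.params.entryName); // Express decodes the param
  if (!entry) return res.sendStatus(404);
  res.type('audio/mpeg').send(entry.getData()); // getData() returns a Buffer
});

app.listen(3000);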
You may also run into an issue with the audio element, though, because it will only show the length of the current audio segment it is on, and not the full length of the audiobook. I assume this zip file has the book separated by chapter, correct? If so, you could create a chapter selector that lets you jump to a specific chapter, i.e. a specific getAudioUrls URL.
I hope this helps!
One other note for you... reading your comment on a potential answer down below: you could combine all the audio files into one using some sort of node module (audioconcat is one I found after a quick Google search) and return that one file to the client. However, I would not personally take this route, because the entire audiobook will be in the server's memory while it combines them and until it returns it to the client. This could cause memory issues down the road, so I would avoid it if I could. However, I will admit that this option could be nice because the full length of the audiobook will display in the audio element's timeline.
The best option, perhaps, is to store the book's full length and chapter lengths in a details.json file inside the zip and send that to the client in the first API call, along with the URLs to each audio file. This would enable you to build a nice UI, for example a chapter selector like the sketch below.
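A minimal sketch of that chapter-selector idea, assuming a hypothetical response shape of { details, urls } where details carries the chapter titles and lengths (this shape is not prescribed anywhere; it is just one way to lay it out):

// Assumed response: { details: { title, totalSeconds, chapters: [{ title, seconds }] }, urls: [...] }
fetch('/getAudioUrls/#BookNameOrIdHere#')
  .then(r => r.json())
  .then(({ details, urls }) => {
    const select = document.getElementById('chapters'); // <select id="chapters"></select>
    details.chapters.forEach((chapter, index) => {
      const option = document.createElement('option');
      option.value = index;
      option.textContent = `${chapter.title} (${Math.round(chapter.seconds / 60)} min)`;
      select.appendChild(option);
    });
    // Jumping to a chapter just means loading that chapter's URL into the player
    select.onchange = () => {
      const player = document.getElementById('player');
      player.src = urls[select.value];
      player.play();
    };
  });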
The only options I can think of are to either use a JavaScript MP3 decoder (or a C decoder compiled to asm.js/wasm) together with the audio APIs, or wrap the MP3 in an MP4 using something like mux.js and use Media Source Extensions for playback.
Maybe this will help you:
<audio controls="controls">
  <source src="track.ogg" type="audio/ogg" />
  <source src="track.mp3" type="audio/mpeg" />
  Your browser does not support the audio element.
</audio>

Use Blazor to play local audio file in browser

Disclaimer: I am familiar with web technologies but still a newbie.
Scenario: I want the user to choose an audio file from the local file system. Then I want to show a small audio control on the webpage to play the selection and send the audio back to the server (after clicking a button).
Problem: Using MatBlazor FileUpload, I am able to get a stream to the local audio file, but I am at a loss on how to use it with the HTML audio element. Specifically, how can I pass the audio stream to the src of the element?
One clear way to do this in JavaScript is to use an <input type="file"/> element and then use FileReader to play the audio. Something like what is shown here: Using FileReader.readAsDataURL(), Using URL.createObjectURL().
How can I do this in Blazor, i.e. play a local audio file in the browser using the stream, the right way?
Current workaround: For now, I am reading the stream, converting the audio to a base64 string, and then passing it on to the audio element.
The downside of this approach is that for a large audio file of about 18 MB the conversion takes ~30 seconds, and the UI is stuck until then. If I use the JavaScript way with interop, the load time is almost instantaneous, but then I have to use the input element and not the MatFileUpload component. Another reason to have the local file stream is that I want to send this audio file to the server for further processing, and I have found that can be done easily using streams.
The code I am using to convert the audio stream to a base64 string:
async Task FilesReadyForContent(IMatFileUploadEntry[] files)
{
    // base64Audio, loadingAudio, and file are fields defined outside this method
    // so the markup can bind to them and update the DOM
    try
    {
        file = files.FirstOrDefault();
        if (file == null)
        {
            base64Audio = "Error! Could not load file";
        }
        else
        {
            using (MemoryStream ms = new MemoryStream())
            {
                loadingAudio = true;
                await InvokeAsync(() => this.StateHasChanged());

                await file.WriteToStreamAsync(ms);
                base64Audio = System.Convert.ToBase64String(ms.ToArray());

                loadingAudio = false;
                await InvokeAsync(() => this.StateHasChanged());
            }
        }
    }
    catch (Exception ex)
    {
        base64Audio = $"Error! Exception:\r\n{ex.Message}\r\n{ex.StackTrace}";
    }
    finally
    {
        await InvokeAsync(() => this.StateHasChanged());
    }
}
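For comparison, the "JavaScript way with interop" mentioned in the question usually comes down to a small helper like the one below, which turns the chosen file into an object URL and assigns it to an audio element. The function name and element IDs are illustrative, and you would call it from Blazor via IJSRuntime, e.g. await JS.InvokeVoidAsync("playLocalAudioFile", "song", "player"):

// wwwroot/audioInterop.js (hypothetical helper registered on window)
window.playLocalAudioFile = function (inputId, audioId) {
  const input = document.getElementById(inputId);
  const audio = document.getElementById(audioId);
  if (!input || !input.files || !input.files.length || !audio) return;
  // Object URLs avoid the base64 conversion cost entirely
  const url = URL.createObjectURL(input.files[0]);
  if (audio.src) URL.revokeObjectURL(audio.src); // clean up any previous URL
  audio.src = url;
  audio.play();
};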

How Do I Use <input> to save the uploaded to a javascript variable?

I am trying to learn how to use the input upload tag to take audio files and play them.
In my HTML I have an input:
<h3>Upload Song: <input id="song" type="file" accept="audio/*" oninput="updateSong()"></input></h3>
The idea being that once the song is uploaded from the user's computer, the updateSong() function is called and the system automatically saves the song as a var in JavaScript.
This would be done through the updateSong() function:
function updateSong(){
  song = document.getElementById("song");
  console.log(song)
  song.value.play();
}
var song;
Then, once the song is saved, I would like for the song to play - just as a test so I know it works.
However, when I use this code to execute my idea, I get the error:
TypeError: song.value.play is not a function
at updateSong (/script.js:32:14)
at HTMLInputElement.oninput (/:17:91)
What am I missing that is causing the code not to run? I established the song variable and then updated it with the uploaded song. This seems straightforward, so I'm not sure why it doesn't work.
The file input has a files property, which allows you to enumerate its list of files.
From there, you can use URL.createObjectURL to create a temporary Blob-style object URL which references the file from the user's computer.
With that URL, you can instantiate a new Audio element and start playback. For example:
document.querySelector('input[type="file"]').addEventListener('input', (e) => {
  console.log(e.target.files);
  if (e.target.files.length) {
    const audio = new Audio(
      URL.createObjectURL(e.target.files[0])
    );
    audio.play();
  }
});
(JSFiddle: https://jsfiddle.net/6tkxw0aj/)
Don't forget to revoke your object URL later when you're done with it!
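For example, one way to do that (an illustrative variation on the snippet above, not part of the original answer) is to revoke the URL once playback has finished:

document.querySelector('input[type="file"]').addEventListener('input', (e) => {
  if (!e.target.files.length) return;
  const url = URL.createObjectURL(e.target.files[0]);
  const audio = new Audio(url);
  // Release the blob reference once we no longer need it
  audio.addEventListener('ended', () => URL.revokeObjectURL(url));
  audio.play();
});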

HTML5 Video: Streaming Video with Blob URLs

I have an array of Blobs (binary data, really; I can express it however is most efficient. I'm using Blobs for now, but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like this:
var arrayOfBlobs = [];
setInterval(function() {
arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 <video> element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
Create a MediaSource
Create an object URL from the MediaSource
Set the video's src to the object URL
On the sourceopen event, create a SourceBuffer
Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work
You can only append one blob of video data to the SourceBuffer at a time, and you can't append a second blob until the first one has finished (asynchronously) processing
If you append too much data to the SourceBuffer without calling .remove() then you'll eventually run out of RAM and the video will stop playing. I hit this limit around 1 hour on my laptop
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
<video id="video"></video>
<script>
// As before, I'm regularly grabbing blobs of video data
// The implementation of "nextChunk" could be various things:
// - reading from a MediaRecorder
// - reading from an XMLHttpRequest
// - reading from a local webcam
// - generating the files on the fly in JavaScript
// - etc
var arrayOfBlobs = [];
setInterval(function() {
arrayOfBlobs.push(nextChunk());
// NEW: Try to flush our queue of video data to the video element
appendToSourceBuffer();
}, 1000);
// 1. Create a `MediaSource`
var mediaSource = new MediaSource();
// 2. Create an object URL from the `MediaSource`
var url = URL.createObjectURL(mediaSource);
// 3. Set the video's `src` to the object URL
var video = document.getElementById("video");
video.src = url;
// 4. On the `sourceopen` event, create a `SourceBuffer`
var sourceBuffer = null;
mediaSource.addEventListener("sourceopen", function()
{
// NOTE: Browsers are VERY picky about the codec being EXACTLY
// right here. Make sure you know which codecs you're using!
sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
// If we requested any video data prior to setting up the SourceBuffer,
// we want to make sure we only append one blob at a time
sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});
// 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
function appendToSourceBuffer()
{
if (
mediaSource.readyState === "open" &&
sourceBuffer &&
sourceBuffer.updating === false &&
arrayOfBlobs.length
)
{
// appendBuffer() expects an ArrayBuffer or typed array, so each chunk
// pushed into arrayOfBlobs should already be in that form
// (e.g. converted from a Blob with blob.arrayBuffer())
sourceBuffer.appendBuffer(arrayOfBlobs.shift());
}
// Limit the total buffer size to 20 minutes
// This way we don't run out of RAM
if (
video.buffered.length &&
video.buffered.end(0) - video.buffered.start(0) > 1200
)
{
sourceBuffer.remove(0, video.buffered.end(0) - 1200)
}
}
</script>
</body>
</html>
As an added bonus this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek by simply using video.currentTime = ...)
Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
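In terms of the example above, that is just one extra line right after the SourceBuffer is created (the codec string is repeated here only for context):

mediaSource.addEventListener("sourceopen", function () {
  sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
  // Append chunks strictly in arrival order instead of by timestamp
  sourceBuffer.mode = "sequence";
  sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});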
Additionally, make sure that you are not sending any packets with data.size === 0, and avoid letting chunks stack up by clearing the queue on the broadcasting side, unless you want to record it as one entire video; in that case, just make sure the broadcast video is small enough and that your internet speed is fast. The smaller the chunks and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in MP4 format. The video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
