Recorded audio has unknown duration - JavaScript

Okay, so I successfully created a structure to record and download audio. But the problem is that the final downloaded file has an unknown duration. Is there any way to work around that?
Here is my code (it's in TypeScript):
let recorder: MediaRecorder
let dataArray: Blob[] = []  // must be initialized, otherwise push() below throws

async function InitializeAudio(): Promise<void> {
    const audioIN = { audio: true }
    try {
        const mediaStreamObj = await navigator.mediaDevices.getUserMedia(audioIN)
        recorder = new MediaRecorder(mediaStreamObj)
        recorder.ondataavailable = (ev) => {
            dataArray.push(ev.data)
        }
        recorder.onstop = () => {
            const downloadTag = document.querySelector('#file-download') as HTMLAnchorElement
            const audioFile = new Blob(dataArray, { type: 'audio/mp3' })
            const url = window.URL.createObjectURL(audioFile)
            downloadTag.href = url
            downloadTag.download = `my-${Date.now()}-audio.mp3`
            downloadTag.click()
            window.URL.revokeObjectURL(url)
            dataArray = []
        }
    } catch (e) {
        throw new Error(String(e))  // Error's constructor expects a string
    }
}

Okay, so after weeks of searching and researching, I found out that the MediaRecorder API can only record in MIME types supported by the browser. In Chrome the only supported audio MIME type is audio/webm, and the generated Blob can't simply be relabeled as another MIME type. I was able to find a lasting solution using the wavesurfer & lamejs libraries.
Links to them 👇👇
https://www.npmjs.com/package/wavesurfer.js?activeTab=readme
https://www.npmjs.com/package/lamejs
You might run into an error using the lamejs library, so here is a link to a refactored fork of the lamejs project 👇👇
https://www.npmjs.com/package/lamejstmp
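Before reaching for re-encoding, it's worth probing what the browser can actually record. A minimal sketch of that check (the candidate list is my own, not exhaustive; in a real page you would pass `MediaRecorder.isTypeSupported` as the predicate):

```javascript
// Pick the first recording MIME type the environment supports.
// isSupported is injected so the logic can run outside a browser;
// in a page, call pickMimeType(candidates, t => MediaRecorder.isTypeSupported(t)).
function pickMimeType(candidates, isSupported) {
    const found = candidates.find(isSupported);
    return found || ''; // '' lets MediaRecorder fall back to its default
}

// Example candidate list, most-preferred first (my own choice)
const candidates = ['audio/mp4', 'audio/ogg;codecs=opus', 'audio/webm;codecs=opus', 'audio/webm'];
```

The result can then be handed to the recorder, e.g. `new MediaRecorder(stream, { mimeType: pickMimeType(candidates, t => MediaRecorder.isTypeSupported(t)) })`, instead of assuming mp3 support that no browser ships.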

Related

Javascript MediaRecorder audio recording corrupt

I am struggling to record audio in the browser and make it work properly on mobile as well as desktop.
I am using MediaRecorder to start the recording, and I want to send it as a file to my Flask server through a form. However, what I receive is a corrupt file that sometimes plays on my desktop, but not on my mobile phone. I think it is connected to the different mimeTypes that are supported and to how the blob gets converted.
Here is the JavaScript code:
function record_audio(){
    if (state == "empty") {
        navigator.mediaDevices.getUserMedia({ audio: true })
            .then(stream => {
                mediaRecorder = new MediaRecorder(stream);
                mediaRecorder.start();
                state = "recording";
                document.getElementById('stop_btn').style.display = 'block';
                seconds_int = setInterval(function () {
                    document.getElementById("record_btn").innerHTML = seconds_rec + " s";
                    seconds_rec += 1;
                }, 1000);
                mediaRecorder.addEventListener("dataavailable", event => {
                    audioChunks.push(event.data);
                    if (mediaRecorder.state == 'inactive') makeLink();
                });
            });
    }
}
function makeLink(){
    const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
    const audioUrl = URL.createObjectURL(audioBlob);
    var sound = document.createElement('audio');
    sound.id = 'audio-player';
    sound.controls = 'controls';
    sound.src = audioUrl;
    console.log(audioBlob);
    sound.type = 'audio/mpeg';
    document.getElementById("audio-player-container").innerHTML = sound.outerHTML;
    let file = new File([audioBlob], "audio.mp3", { type: "audio/mpeg", lastModifiedDate: new Date() });
    let container = new DataTransfer();
    container.items.add(file);
    document.getElementById("uploadedFile").files = container.files;
}
Thanks for your help!
The audio that you recorded is most likely not of type 'audio/mpeg'. No browser supports that out of the box.
If you call new MediaRecorder(stream) without the optional second argument, the browser will pick the codec it likes best. You can use the mimeType property to find out which codec the browser actually used. It can, for example, be used to construct the Blob:
const audioBlob = new Blob(audioChunks, {
    type: mediaRecorder.mimeType
});
You would also need to use it in a similar way when creating the File. And you probably also need to adapt your backend logic to handle files which aren't MP3s.
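Once you know the real mimeType, you can also name the File to match instead of hard-coding ".mp3". A small helper along these lines (the extension table is my own mapping of common containers, not something from the MediaRecorder spec):

```javascript
// Derive a sensible file extension from a MediaRecorder mimeType,
// e.g. "audio/webm;codecs=opus" -> "webm".
function extensionFor(mimeType) {
    const container = mimeType.split(';')[0].trim(); // drop codec parameters
    const known = { 'audio/webm': 'webm', 'audio/ogg': 'ogg', 'audio/mp4': 'm4a', 'audio/mpeg': 'mp3' };
    return known[container] || container.split('/')[1] || 'bin';
}
```

It would be used where the question builds the File, e.g. `new File([audioBlob], "audio." + extensionFor(mediaRecorder.mimeType), { type: mediaRecorder.mimeType })`.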

JS MediaRecorder API exports a non seekable WebM file

I am working on a video editor where the video is rendered using the canvas, so I use the JS MediaRecorder API. But I have run into an odd problem: because the MediaRecorder API is primarily designed for live streams, my exported WebM file doesn't report how long it is until it's done playing, which is kinda annoying.
This is the code I am using:
function exportVideo() {
    const stream = preview.captureStream();
    const dest = audioContext.createMediaStreamDestination();
    const sources = []
        .concat(...layers.map((layer) => layer.addAudioTracksTo(dest)))
        .filter((source) => source);
    // exporting doesn't work if there's no audio and it adds the tracks
    if (sources.length) {
        dest.stream.getAudioTracks().forEach((track) => stream.addTrack(track));
    }
    const recorder = new MediaRecorder(stream, {
        mimeType: usingExportType,
        videoBitsPerSecond: exportBitrate * 1000000,
    });
    let download = true;
    recorder.addEventListener("dataavailable", (e) => {
        exportedURL = URL.createObjectURL(e.data);
        if (download) {
            const saveLink = document.createElement("a");
            saveLink.href = exportedURL;
            saveLink.download = "video-export.webm";
            document.body.appendChild(saveLink);
            saveLink.click();
            document.body.removeChild(saveLink);
        }
    });
    previewTimeAt(0, false);
    return new Promise((res) => {
        recorder.start();
        audioContext.resume().then(() => play(res));
    }).then((successful) => {
        download = successful;
        recorder.stop();
        sources.forEach((source) => {
            source.disconnect(dest);
        });
    });
}
And if this is too vague, please tell me what is vague about it.
Thanks!
EDIT: Narrowed the problem down: this is a Chrome bug, see https://bugs.chromium.org/p/chromium/issues/detail?id=642012. I discovered a library, https://github.com/legokichi/ts-ebml, that may be able to make the WebM seekable, but unfortunately this is a JavaScript project, and I'm not setting up TypeScript.
JS MediaRecorder API exports a non seekable WebM file
Yes, it does; that's in the nature of streaming.
In order to make that sort of stream seekable you need to post-process it. There's an npm ebml library (the ts-ebml package mentioned above) if you want to attempt it.

JavaScript: Use MediaRecorder to record streams from <video> but failed

I'm trying to record parts of the video from a <video> tag and save it for later use. I found this article: Recording a media element, which describes a method of first calling stream = video.captureStream(), then using new MediaRecorder(stream) to get a recorder.
I've tested some demos, and MediaRecorder works fine if the stream is from the user's device (such as a microphone). However, when it comes to media elements, my Firefox browser throws an exception: MediaRecorder.start: The MediaStream's isolation properties disallow access from MediaRecorder.
So, any idea on how to deal with it?
Browser: Firefox
The page (including the JS file) is stored locally.
The src attribute of the <video> tag can either be a file from local storage or a URL from the Internet.
Code snippets:
let chunks = [];

let getCaptureStream = function () {
    let stream;
    const fps = 0;
    if (video.captureStream) {
        console.log("use captureStream");
        stream = video.captureStream(fps);
    } else if (video.mozCaptureStream) {
        console.log("use mozCaptureStream");
        stream = video.mozCaptureStream(fps);
    } else {
        console.error('Stream capture is not supported');
        stream = null;
    }
    return stream;
}

video.addEventListener('play', () => {
    let stream = getCaptureStream();
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.onstop = function () {
        const newVideo = document.createElement('video');
        newVideo.controls = true;
        // the MIME type belongs on the Blob; createObjectURL takes no options
        const blob = new Blob(chunks, { type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' });
        chunks = [];
        const videoURL = window.URL.createObjectURL(blob);
        newVideo.src = videoURL;
        document.body.appendChild(newVideo); // append the new element, not the source <video>
    }
    mediaRecorder.ondataavailable = function (e) {
        chunks.push(e.data);
    }
    stopButton.onclick = function () {
        mediaRecorder.stop();
    }
    mediaRecorder.start(); // This is the line that triggers the exception.
});
I found the solution myself.
When I turned to Chrome, it showed that a CORS issue prevented me from even playing the original video. So I guess it's the security policy that prevents MediaRecorder from accessing the MediaStream. Therefore, I deployed the local files to a local server, following the instructions on this page.
After that, the MediaRecorder started working. Hope this will help someone in need.
But still, the official documentation doesn't seem to say much about the isolation properties of media elements, so any further explanation is welcome.

How to record microphone audio in JavaScript and submit to DialogFlow?

How can I record audio from the microphone in JavaScript and submit it to DialogFlow, without going through a server?
There are two parts to this question:
How to record microphone audio in a format DialogFlow will understand.
How to actually submit that audio to DialogFlow, with proper authentication.
Part 1
For recording microphone audio in a format DialogFlow will understand, I use opus-recorder, then convert the blob it returns using the code below:
function BlobToDataURL(blob: Blob) {
    return new Promise((resolve, reject) => {
        const reader = new FileReader();
        reader.addEventListener("loadend", e => resolve(reader.result as string));
        reader.readAsDataURL(blob);
    }) as Promise<string>;
}

const micRecorder = new Recorder({
    encoderSampleRate: 16000,
    originalSampleRateOverride: 16000, // necessary due to Google bug? (https://github.com/chris-rudmin/opus-recorder/issues/191#issuecomment-509426093)
    encoderPath: PATH_TO_ENCODER_WORKER_JS,
});

micRecorder.ondataavailable = async typedArray => {
    const audioData = new Blob([typedArray], { type: "audio/ogg" });
    const audioData_dataURL = await BlobToDataURL(audioData);
    const audioData_str = audioData_dataURL.replace(/^data:.+?base64,/, "");
    // here is where you need part 2, to actually submit the audio to DialogFlow
};

micRecorder.start();
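The data-URL prefix handling above is easy to get wrong, and outside the browser (e.g. in Node, where FileReader doesn't exist) the same base64 payload can be produced directly. A sketch under that assumption:

```javascript
// Build the raw base64 payload DialogFlow expects from audio bytes.
// In the browser, the BlobToDataURL + replace() pair above does the same job.
function toBase64Payload(bytes) {
    return Buffer.from(bytes).toString('base64'); // Buffer is Node-only
}

// Strip the "data:<mime>;base64," prefix that a FileReader data URL carries.
function stripDataUrlPrefix(dataUrl) {
    return dataUrl.replace(/^data:.+?base64,/, "");
}
```

Either path yields the bare base64 string that goes into the request body, without the `data:` prefix that would otherwise corrupt the payload.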
Part 2
To submit the audio-data to DialogFlow, see my answer here: https://stackoverflow.com/a/57857698/2441655

Downloading an Azure Storage Blob using pure JavaScript and Azure-Storage-Js

I'm trying to do this with just pure JavaScript and the SDK; I am not using Node.js. I'm converting my application from v2 to v10 of the SDK, azure-storage-js-v10.
The azure-storage.blob.js bundled file is compatible with the UMD standard; if no module system is found, the following global variable will be exported: azblob
My code is here:
const serviceURL = new azblob.ServiceURL(`https://${account}.blob.core.windows.net${accountSas}`, pipeline);
const containerName = "container";
const containerURL = azblob.ContainerURL.fromServiceURL(serviceURL, containerName);
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
const downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
The downloadBlobResponse looks like this:
Using v10, how can I convert the downloadBlobResponse into a new blob so it can be used in the FileSaver saveAs() function?
In azure-storage-js-v2 this code worked on smaller files:
let readStream = blobService.createReadStream(containerName, blobName, (err, res) => {
    if (err) {
        // Handle read blob error
    }
});
// Use an event listener to receive data
readStream.on('data', data => {
    // Uint8Array retrieved
    // Convert the array back into a blob
    var newBlob = new Blob([new Uint8Array(data)]);
    // Saves the file to the user's downloads directory
    saveAs(newBlob, blobName); // FileSaver.js
});
I've tried everything to get v10 working, any help would be greatly appreciated.
Thanks,
You need to get the body by awaiting blobBody:
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// data is a browser Blob type
const data = await downloadBlobResponse.blobBody;
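blobBody resolves to a browser Blob, which FileSaver can take directly. Since modern runtimes also expose Blob globally, the read-back can be sketched and exercised outside the browser (saveAs itself remains browser-only):

```javascript
// Read a Blob back into bytes, e.g. to inspect what blobBody resolved to.
// In the browser, the Blob itself is what ultimately gets handed to saveAs().
async function blobToBytes(blob) {
    return new Uint8Array(await blob.arrayBuffer());
}
```

Note that in Node the SDK populates readableStreamBody instead of blobBody; the Blob path above is the browser branch of the response.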
Thanks Mike Coop and Xiaoning Liu!
I was busy making a Vue.js plugin to download blobs from a storage account. Thanks to you, I was able to make this work.
var FileSaver = require('file-saver');
const { BlobServiceClient } = require("@azure/storage-blob");

const downloadButton = document.getElementById("download-button");

const downloadFiles = async () => {
    try {
        if (fileList.selectedOptions.length > 0) {
            reportStatus("Downloading files...");
            for await (const option of fileList.selectedOptions) {
                var blobName = option.text;
                const account = '<account name>';
                const sas = '<blob sas token>';
                const containerName = '<container name>';
                const blobServiceClient = new BlobServiceClient(`https://${account}.blob.core.windows.net${sas}`);
                const containerClient = blobServiceClient.getContainerClient(containerName);
                const blobClient = containerClient.getBlobClient(blobName);
                // download(offset, count) - the client is already bound to the blob name
                const downloadBlockBlobResponse = await blobClient.download(0);
                const data = await downloadBlockBlobResponse.blobBody;
                // Saves the file to the user's downloads directory
                FileSaver.saveAs(data, blobName); // FileSaver.js
            }
            reportStatus("Done.");
            listFiles();
        } else {
            reportStatus("No files selected.");
        }
    } catch (error) {
        reportStatus(error.message);
    }
};

downloadButton.addEventListener("click", downloadFiles);
Thanks Xiaoning Liu!
I'm still learning about async JavaScript functions and promises. Guess I was just missing another "await". I saw that downloadBlobResponse.blobBody was a promise resolving to a Blob, but I couldn't figure out why it wouldn't convert to a new blob: I kept getting the "Iterator getter is not callable" error.
Here's my final working solution:
// Create a BlobURL
const blobURL = azblob.BlobURL.fromContainerURL(containerURL, blobName);
// Download blob
downloadBlobResponse = await blobURL.download(azblob.Aborter.none, 0);
// In browsers, get downloaded data by accessing downloadBlockBlobResponse.blobBody
const data = await downloadBlobResponse.blobBody;
// Saves file to the user's downloads directory
saveAs(data, blobName); // FileSaver.js
