I have an audio blob and I want to chop it up at a specific time. How should I do that in JavaScript?
Example:
sliceAudioBlob( audio_blob, 0, 10000 ); // Time in milliseconds [ start = 0, end = 10000 ]
Note: I have no clue how to do that, so a little hint would be really
appreciated.
Update:
I'm trying to build a simple audio recorder, but the problem is that the recorded duration differs between browsers: some add a few seconds (Firefox) and others don't (Chrome). So I came up with the idea of coding a method that returns only the slice I want.
Full HTML code :
<!DOCTYPE html>
<html>
<head>
  <meta charset="UTF-8">
  <title>Audio Recorder</title>
  <style>
    audio {
      display: block;
    }
  </style>
</head>
<body>
  <button type="button" onclick="mediaRecorder.start(1000)">Start</button>
  <button type="button" onclick="mediaRecorder.stop()">Stop</button>
  <script type="text/javascript">
    var mediaRecorder = null,
        chunks = [],
        max_duration = 10000; // in milliseconds.
    function onSuccess(stream) {
      mediaRecorder = new MediaRecorder(stream);
      mediaRecorder.ondataavailable = function(event) {
        // chunks.length is the number of recorded seconds
        // since every chunk is 1 second long.
        if (chunks.length < max_duration / 1000) {
          chunks.push(event.data);
        } else if (mediaRecorder.state === 'recording') {
          mediaRecorder.stop();
        }
      };
      mediaRecorder.onstop = function() {
        var audio = document.createElement('audio'),
            audio_blob = new Blob(chunks, {
              type: 'audio/mpeg'
            });
        audio.controls = 'controls';
        audio.autoplay = 'autoplay';
        audio.src = window.URL.createObjectURL(audio_blob);
        document.body.appendChild(audio);
      };
    }
    var onError = function(err) {
      console.log('Error: ' + err);
    };
    navigator.mediaDevices.getUserMedia({ audio: true }).then(onSuccess, onError);
  </script>
</body>
</html>
There is no straightforward way to slice an audio media file like that, because your file is made of more than a raw sound signal: it contains multiple segments, headers, etc., whose positions can't be determined just from a byteLength. It is like trying to crop a JPEG image by keeping only its first x bytes.
There might be ways using the Web Audio API to convert your media file to an AudioBuffer, slice that AudioBuffer's raw PCM data as you wish, and then pack it back into a media file with the correct new descriptors, but I think you are facing an X-Y problem, and if I got it correctly, there is a simple way to fix the X problem.
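If you ever do need that route, here is a minimal sketch of it, assuming the blob contains decodable audio. Note that it yields a raw AudioBuffer to be played back through the Web Audio API, not a re-encoded media file:
async function sliceAudioBlob(audio_blob, start, end) { // times in milliseconds
  const ctx = new (window.AudioContext || window.webkitAudioContext)();
  const buf = await ctx.decodeAudioData(await audio_blob.arrayBuffer());
  const from = Math.floor(buf.sampleRate * start / 1000);
  const to = Math.min(buf.length, Math.floor(buf.sampleRate * end / 1000));
  const sliced = ctx.createBuffer(buf.numberOfChannels, to - from, buf.sampleRate);
  for (let ch = 0; ch < buf.numberOfChannels; ch++) {
    // copy the selected range of raw PCM samples, channel by channel
    sliced.copyToChannel(buf.getChannelData(ch).subarray(from, to), ch);
  }
  return sliced; // play it back through an AudioBufferSourceNode
}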
Indeed, the problem you describe is that neither Chrome nor Firefox reliably produces a 10s media file from your code.
But that is because you are relying on the timeslice argument of MediaRecorder.start(timeslice) to give you chunks of exact duration.
It won't. This argument should only be understood as a hint you are giving to the browser, which may well impose its own minimum timeslice and thus not respect your argument (spec §2.3 Methods, step 5.4).
Instead, you'll be better off using a simple setTimeout to trigger your recorder's stop() method when you want:
start_btn.onclick = function() {
  mediaRecorder.start(); // we don't even need timeslice
  // now we'll get a similar max duration in every browser
  setTimeout(stopRecording, max_duration);
};
stop_btn.onclick = stopRecording;
function stopRecording() {
  if (mediaRecorder.state === "recording")
    mediaRecorder.stop();
}
Here is a live example using gUM, hosted on jsfiddle.
And here is a live snippet using a silent stream from the Web Audio API, because StackSnippets' protection doesn't play well with gUM...
var start_btn = document.getElementById('start'),
    stop_btn = document.getElementById('stop');
var mediaRecorder = null,
    chunks = [],
    max_duration = 10000; // in milliseconds.
start_btn.onclick = function() {
  mediaRecorder.start(); // we don't even need timeslice
  // now we'll get a similar max duration in every browser
  setTimeout(stopRecording, max_duration);
  this.disabled = !(stop_btn.disabled = false);
};
stop_btn.onclick = stopRecording;
function stopRecording() {
  if (mediaRecorder.state === "recording")
    mediaRecorder.stop();
  stop_btn.disabled = true;
}
function onSuccess(stream) {
  mediaRecorder = new MediaRecorder(stream);
  mediaRecorder.ondataavailable = function(event) {
    // simply always push here, the stop is controlled by the setTimeout
    chunks.push(event.data);
  };
  mediaRecorder.onstop = function() {
    var audio_blob = new Blob(chunks);
    var audio = new Audio(URL.createObjectURL(audio_blob));
    audio.controls = 'controls';
    document.body.appendChild(audio);
    // workaround https://crbug.com/642012
    audio.currentTime = 1e12;
    audio.onseeked = function() {
      audio.onseeked = null;
      console.log(audio.duration);
      audio.currentTime = 0;
      audio.play();
    };
  };
  start_btn.disabled = false;
}
var onError = function(err) {
  console.log('Error: ' + err);
};
onSuccess(SilentStream());
function SilentStream() {
  var ctx = new (window.AudioContext || window.webkitAudioContext)(),
      gain = ctx.createGain(),
      dest = ctx.createMediaStreamDestination();
  gain.connect(dest);
  return dest.stream;
}
<button id="start" disabled>start</button>
<button id="stop" disabled>stop</button>
In your line:
<button type="button" onclick="mediaRecorder.start(1000)">Start</button>
mediaRecorder.start receives the timeslice as a parameter. The timeslice specifies the size of each chunk in milliseconds. So, in order to cut your audio, you should modify the chunks array that you have been building in mediaRecorder.ondataavailable.
E.g.: you pass 1000 as the timeslice, which means you have slices of 1 second each, and you want to cut the first 2 seconds of the recording.
You just have to do something like this:
mediaRecorder.onstop = function() {
  // Remove the first 2 seconds of the audio
  var chunksSliced = chunks.slice(2);
  var audio = document.createElement('audio'),
      // create the audio from the sliced chunks
      audio_blob = new Blob(chunksSliced, {
        type: 'audio/mpeg'
      });
  audio.controls = 'controls';
  audio.autoplay = 'autoplay';
  audio.src = window.URL.createObjectURL(audio_blob);
  document.body.appendChild(audio);
};
You can reduce the chunk size in milliseconds if needed: just pass a different number to start, and slice the array where you want.
For a more specific answer, you could write an audioSlice helper like this:
const TIMESLICE = 1000;
// @param chunks Array with the audio chunks
// @param start  where to start cutting, in seconds
// @param end    where to stop cutting, in seconds
function audioSlice(chunks, start, end) {
  const timeSliceToSeconds = TIMESLICE / 1000;
  const startIndex = Math.round(start / timeSliceToSeconds);
  const endIndex = Math.round(end / timeSliceToSeconds);
  if (startIndex < chunks.length && endIndex <= chunks.length) {
    return chunks.slice(startIndex, endIndex);
  }
  throw Error('You cannot cut this at those points');
}
If you change TIMESLICE to your own value, it will calculate where to cut when the points are given in seconds.
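A hypothetical usage example, assuming chunks was filled by a recorder started with mediaRecorder.start(TIMESLICE):
// keep seconds 2 to 8 of the recording
const keptChunks = audioSlice(chunks, 2, 8);
const audio_blob = new Blob(keptChunks, { type: 'audio/mpeg' });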
Try to collect the timesliced chunks produced while paused, then filter them out:
if (this.isPaused) {
  this.deleteChunks.push(...audioChunks);
}
// keep only the chunks that were not collected during a pause
const pushChunks = _.filter(audioChunks, chunk => _.indexOf(this.deleteChunks, chunk) === -1);
console.log("audioChunks::", this.deleteChunks, pushChunks.length, audioChunks.length);
Related
I am attempting to add silence before and after an audio file using JavaScript. My idea was to add zero values at the beginning and end of the audio. Both of the methods I tried failed.
Method 1, playback audio via AudioContext
const context = new AudioContext()
async function createSoundArray() {
  await fetch('file.mp3')
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => context.decodeAudioData(arrayBuffer))
    .then(audioBuffer => {
      const data = audioBuffer.getChannelData(0) // Float32Array(100000)
      editAudio(data)
    });
}
createSoundArray()
function editAudio(data) {
  const editedData = []
  for (var i = 0; i < data.length + 50000; i++) { // adds 25000 0 values (silence) to the beginning and end of the array
    if (i > 25000 && i <= 125000) {
      editedData.push(data[i - 25000])
    } else {
      editedData.push(0)
    }
  }
  console.log(editedData) // Array(150000)
  const Uint32 = Uint32Array.from(editedData)
  const audioCtx = new AudioContext()
  console.log(Uint32.buffer) // ArrayBuffer(150000)
  audioCtx.decodeAudioData(Uint32.buffer, function (buf) { // error: could not decode
    const playback = audioCtx.createBufferSource();
    playback.buffer = buf;
    playback.connect(audioCtx.destination);
    audioCtx.resume()
    console.log(playback.buffer)
  });
}
This method resulted in the following error:
The buffer passed to decodeAudioData contains an unknown content type.
Method 2, Playback audio via audio element
const context = new AudioContext()
async function createSoundArray() {
  await fetch('file.mp3')
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => context.decodeAudioData(arrayBuffer))
    .then(audioBuffer => {
      const data = audioBuffer.getChannelData(0) // Float32Array(100000)
      editAudio(data)
    });
}
createSoundArray()
function editAudio(data) {
  const editedData = []
  for (var i = 0; i < data.length + 50000; i++) { // adds 25000 0 values (silence) to the beginning and end of the array
    if (i > 25000 && i <= 125000) {
      editedData.push(data[i - 25000])
    } else {
      editedData.push(0)
    }
  }
  const blob = new Blob(editedData)
  const audioUrl = URL.createObjectURL(blob);
  const audio = new Audio(audioUrl);
  audio.play() // error: no supported source found
}
This method resulted in the following error:
Failed to play audio because no supported source was found
Because both of these methods failed, this leads me to believe that the added zeros cause the data not to be recognized as audio. However, I don't understand why, because I am simply adding zeros, which are values that already exist inside the audio data before editing. Does anyone know what I am doing wrong, or why this is not working?
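For what it's worth, the decoded Float32Array is raw PCM, not an encoded file, so it cannot be handed back to decodeAudioData or an <audio> element. It can, however, be padded with zeros and played directly; a minimal sketch under that assumption, passing the decoded buffer's own sampleRate:
function playWithSilence(ctx, data, sampleRate, pad = 25000) {
  // new AudioBuffers are zero-filled, so the leading/trailing silence comes for free
  const out = ctx.createBuffer(1, data.length + 2 * pad, sampleRate);
  out.getChannelData(0).set(data, pad); // copy the original samples into the middle
  const source = ctx.createBufferSource();
  source.buffer = out;
  source.connect(ctx.destination);
  source.start();
}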
I am in the process of replacing RecordRTC with the built in MediaRecorder for recording audio in Chrome. The recorded audio is then played in the program with audio api. I am having trouble getting the audio.duration property to work. It says
If the video (audio) is streamed and has no predefined length, "Inf" (Infinity) is returned.
With RecordRTC, I had to use ffmpeg_asm.js to convert the audio from wav to ogg. My guess is somewhere in the process RecordRTC sets the predefined audio length. Is there any way to set the predefined length using MediaRecorder?
This is a chrome bug.
FF does expose the duration of the recorded media, and if you do set the currentTime of the recorded media to more than its actual duration, then the property becomes available in Chrome...
var recorder,
    chunks = [],
    ctx = new AudioContext(),
    aud = document.getElementById('aud');
function exportAudio() {
  var blob = new Blob(chunks);
  aud.src = URL.createObjectURL(blob);
  aud.onloadedmetadata = function() {
    // it should already be available here
    log.textContent = ' duration: ' + aud.duration;
    // handle chrome's bug
    if (aud.duration === Infinity) {
      // set it to bigger than the actual duration
      aud.currentTime = 1e101;
      aud.ontimeupdate = function() {
        this.ontimeupdate = () => {
          return;
        };
        log.textContent += ' after workaround: ' + aud.duration;
        aud.currentTime = 0;
      };
    }
  };
}
function getData() {
  var request = new XMLHttpRequest();
  request.open('GET', 'https://upload.wikimedia.org/wikipedia/commons/4/4b/011229beowulf_grendel.ogg', true);
  request.responseType = 'arraybuffer';
  request.onload = decodeAudio;
  request.send();
}
function decodeAudio(evt) {
  var audioData = this.response;
  ctx.decodeAudioData(audioData, startRecording);
}
function startRecording(buffer) {
  var source = ctx.createBufferSource();
  source.buffer = buffer;
  var dest = ctx.createMediaStreamDestination();
  source.connect(dest);
  recorder = new MediaRecorder(dest.stream);
  recorder.ondataavailable = saveChunks;
  recorder.onstop = exportAudio;
  source.start(0);
  recorder.start();
  log.innerHTML = 'recording...';
  // record only 5 seconds
  setTimeout(function() {
    recorder.stop();
  }, 5000);
}
function saveChunks(evt) {
  if (evt.data.size > 0) {
    chunks.push(evt.data);
  }
}
// we need user-activation
document.getElementById('button').onclick = function(evt) {
  getData();
  this.remove();
};
<button id="button">start</button>
<audio id="aud" controls></audio><span id="log"></span>
So the advice here would be to star the bug report so that Chromium's team takes some time to fix it, even if this workaround can do the trick...
Thanks to @Kaiido for identifying the bug and offering the working fix.
I prepared an npm package called get-blob-duration that you can install to get a nice Promise-wrapped function to do the dirty work.
Usage is as follows:
// Returns Promise<Number>
getBlobDuration(blob).then(function(duration) {
console.log(duration + ' seconds');
});
Or, with async/await:
// yada yada async
const duration = await getBlobDuration(blob)
console.log(duration + ' seconds')
A bug in Chrome, detected in 2016, but still open today (March 2019), is the root cause behind this behavior. Under certain scenarios audioElement.duration will return Infinity.
Chrome Bug information here and here
The following code provides a workaround to avoid the bug.
Usage: create your audioElement and call this function a single time, providing a reference to your audioElement. When the returned promise resolves, the audioElement.duration property should contain the right value. (It also fixes the same problem with videoElements.)
/**
 * calculateMediaDuration()
 * Force media element duration calculation.
 * Returns a promise that resolves when the duration is calculated.
 **/
function calculateMediaDuration(media) {
  return new Promise((resolve, reject) => {
    media.onloadedmetadata = function() {
      // set the mediaElement.currentTime to a high value beyond its real duration
      media.currentTime = Number.MAX_SAFE_INTEGER;
      // listen to time position change
      media.ontimeupdate = function() {
        media.ontimeupdate = function() {};
        // setting the player's currentTime back to 0 can be buggy too, so set it first to .1 sec
        media.currentTime = 0.1;
        media.currentTime = 0;
        // media.duration should now have its correct value, return it...
        resolve(media.duration);
      };
    };
  });
}
// USAGE EXAMPLE:
calculateMediaDuration(yourAudioElement).then(() => {
  console.log(yourAudioElement.duration);
});
Thanks @colxi for the actual solution. I've added some validation steps, as the solution was working fine but had problems with long audio files.
It took me about 4 hours to get it to work with long audio files; it turns out validation was the fix.
function fixInfinity(media) {
  return new Promise((resolve, reject) => {
    // Wait for the media to load its metadata
    media.onloadedmetadata = () => {
      // Change the current time so ontimeupdate fires
      media.currentTime = Number.MAX_SAFE_INTEGER;
      // Check if the duration is Infinity, NaN or undefined
      if (ifNull(media)) {
        media.ontimeupdate = () => {
          // If it is no longer null, resolve the promise with the duration
          if (!ifNull(media)) {
            resolve(media.duration);
          }
          // The second ontimeupdate is a fallback if the first one fails
          media.ontimeupdate = () => {
            if (!ifNull(media)) {
              resolve(media.duration);
            }
          };
        };
      } else {
        // If the media duration was never Infinity, return it directly
        resolve(media.duration);
      }
    };
  });
}
// Check if the duration is unusable (Infinity, NaN or undefined)
function ifNull(media) {
  return media.duration === Infinity || Number.isNaN(media.duration) || media.duration === undefined;
}
// USAGE EXAMPLE
// Get the audio player from the HTML
const AudioPlayer = document.getElementById('audio');
const getInfinity = async () => {
  // Await the promise
  await fixInfinity(AudioPlayer).then(val => {
    // Reset the audio's current time
    AudioPlayer.currentTime = 0;
    // Log the duration
    console.log(val);
  });
};
I wrapped the webm-duration-fix package to solve the WebM length problem. It can be used in Node.js and web browsers, and supports video files over 2GB without using too much memory.
Usage is as follows:
import fs from 'fs';
import fixWebmDuration from 'webm-duration-fix';
const mimeType = 'video/webm;codecs=vp9';
let blobSlice: BlobPart[] = []; // let, because it is reset after writing
mediaRecorder = new MediaRecorder(stream, {
  mimeType
});
mediaRecorder.ondataavailable = (event: BlobEvent) => {
  blobSlice.push(event.data);
};
mediaRecorder.onstop = async () => {
  // fix the blob; supports fixing webm files larger than 2GB
  const fixBlob = await fixWebmDuration(new Blob([...blobSlice], { type: mimeType }));
  // to write locally, it is recommended to use fs.createWriteStream to reduce memory usage
  const fileWriteStream = fs.createWriteStream(inputPath);
  const blobReadstream = fixBlob.stream();
  const blobReader = blobReadstream.getReader();
  while (true) {
    let { done, value } = await blobReader.read();
    if (done) {
      console.log('write done.');
      fileWriteStream.close();
      break;
    }
    fileWriteStream.write(value);
    value = null;
  }
  blobSlice = [];
};
If you want to modify the video file completely, you can use the "webmFixDuration" package. The other methods are applied only at the display level, on the video tag; with this method, the complete video file is modified.
See the webmFixDuration GitHub example:
mediaRecorder.onstop = async () => {
  const duration = Date.now() - startTime;
  const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
  const fixedBlob = await webmFixDuration(buggyBlob, duration);
  displayResult(fixedBlob);
};
I am setting an Audio element's currentTime = 0, but it always emits the audio ended event, and currentTime always equals the duration.
audio.addEventListener('loadedmetadata', (e) => {
  const that = this;
  const audio = this.oAudio;
  const duration = audio.duration;
  if (duration === Infinity) {
    audio.currentTime = 1e101;
    audio.ontimeupdate = function() {
      audio.ontimeupdate = () => {};
      audio.currentTime = 0;
      that.duration();
    };
  }
}, false);
Because your media lasts less than 1e+101 seconds.
I'm not entirely sure what you are trying to do here, nor why it is a problem that this ended event fires, but it sounds like you are trying to apply this workaround to get the correct duration of some media.
If that is the case, then you have to wait until you receive that duration before attaching the ended listener (and even before doing anything else with that MediaElement).
As pseudo-code, that would be:
// getMediaDuration is an asynchronous task
const duration = await getMediaDuration(audio);
// now that the async part is done we can add our listeners
audio.addEventListener('ended', dosomething);
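A possible implementation of that getMediaDuration helper, wrapping the seek-past-the-end workaround from above in a promise (the helper name comes from the pseudo-code, not from any library):
function getMediaDuration(media) {
  return new Promise((resolve) => {
    if (media.duration !== Infinity) {
      return resolve(media.duration);
    }
    // seek far past the end to force the browser to compute the real duration
    media.currentTime = 1e101;
    media.ontimeupdate = function() {
      media.ontimeupdate = null;
      media.currentTime = 0;
      resolve(media.duration);
    };
  });
}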
I seem to have a very strange problem. I am trying to play a video which is being streamed live, in a web browser. For this, I am looking at the MediaSource object. I have got it working so that the video is taken from a server in chunks to be played. The problem is that the first chunk plays correctly, then playback stops.
To make this even stranger, if I put the computer to sleep after starting streaming, then wake it up, the video plays as expected.
Some Notes:
I am currently using Chrome.
I have tried both with and without calling MediaSource's endOfStream.
var VF = 'video/webm; codecs="vp8,opus"';
var FC = 0;
alert(MediaSource.isTypeSupported(VF));
var url = window.URL || window.webkitURL;
var VSRC = new MediaSource();
var VURL = URL.createObjectURL(VSRC);
var bgi, idx = 1;
var str, rec, dat = [], datb, brl;
var sbx;
// connect the MediaSource to the <video> element first.
vid2.src = VURL;
VSRC.addEventListener("sourceopen", function () {
  // alert(VSRC.readyState);
  // Set up the source only once.
  if (VSRC.sourceBuffers.length == 0) {
    var sb = VSRC.addSourceBuffer(VF);
    sb.mode = 'sequence';
    sb.addEventListener("updateend", function () {
      VSRC.endOfStream();
    });
    sbx = sb;
  }
});
// This function will be called each time we get more chunks from the stream.
dataavailable = function (e) {
  // video is appended to the sourcebuffer, but does not play in the video element
  // unless the computer is put to sleep then awakened!?
  sbx.appendBuffer(e.result);
  FC += 1;
  // These checks behave as expected.
  len.innerHTML = "" + sbx.buffered.length + "|" + VSRC.duration;
  CTS.innerHTML = FC;
};
You are making two big mistakes:
1. You can only call sbx.appendBuffer when the sbx.updating property is false, otherwise appendBuffer will fail. So what you really need is a queue of chunks, and to add a chunk to the queue whenever sbx.updating is true:
if (sbx.updating || segmentsQueue.length > 0)
  segmentsQueue.push(e.result);
else
  sbx.appendBuffer(e.result);
2. Your code explicitly tells the stream to stop after the very first chunk:
sb.addEventListener("updateend", function () {
  VSRC.endOfStream();
});
Here is what you really need to do:
sb.addEventListener("updateend", function () {
  if (!sbx.updating && segmentsQueue.length > 0) {
    sbx.appendBuffer(segmentsQueue.shift());
  }
});
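Putting the two fixes together, a minimal sketch against the question's own variable names (only segmentsQueue is new):
var segmentsQueue = [];
// called each time we get more chunks from the stream
dataavailable = function (e) {
  if (sbx.updating || segmentsQueue.length > 0)
    segmentsQueue.push(e.result); // SourceBuffer busy: queue the chunk
  else
    sbx.appendBuffer(e.result); // SourceBuffer idle: append directly
};
sb.addEventListener("updateend", function () {
  // drain the queue one chunk per updateend; no endOfStream while streaming
  if (!sbx.updating && segmentsQueue.length > 0)
    sbx.appendBuffer(segmentsQueue.shift());
});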