How to load audio file into AudioContext like stream? - javascript

For example, I want to load a 100 MB MP3 file into an AudioContext, and I can do that using XMLHttpRequest.
But with this solution I have to load the whole file before I can play it, because the onprogress handler doesn't return the data received so far.
xhr.onprogress = function (e) {
  console.log(this.response); // null
};
I also tried the fetch method, but it has the same problem.
fetch(url).then((data) => {
  console.log(data); // the body is a ReadableStream,
  // but I can't find a way to use it
});
Is there any way to load an audio file as a stream in client-side JavaScript?

You need to handle the Ajax response in a streaming way.
There is no standard way to do this until fetch and ReadableStream have been properly implemented across all the browsers.
I'll show you the most correct way, according to the new standard, to deal with streaming an Ajax response:
// only works in Blink right now
fetch(url).then(res => {
  const reader = res.body.getReader()
  const pump = () => {
    reader.read().then(({ value, done }) => {
      // value is a Uint8Array chunk of data (push it to the audio context)
      if (!done) pump()
    })
  }
  pump()
})
Firefox is working on implementing streams, but until then you need to use XHR with moz-chunked-arraybuffer.
IE/Edge has ms-stream, which you can use, but it's more complicated.
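A runnable sketch of the pump pattern above, written against any ReadableStream so it can be exercised outside the browser; with fetch(), `res.body` would play the role of `stream`. The `onChunk` callback is a hypothetical stand-in for whatever consumes the data.

```javascript
// Pump a ReadableStream, invoking onChunk for every chunk until done.
function pumpStream(stream, onChunk) {
  const reader = stream.getReader()
  const pump = () =>
    reader.read().then(({ value, done }) => {
      if (done) return
      onChunk(value) // e.g. push the chunk into an audio pipeline
      return pump()
    })
  return pump() // resolves once the stream is fully consumed
}
```

Returning the promise from each recursive `pump()` call chains the reads, so the returned promise settles only after the final `done` read.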

How can I send value.buffer to an AudioContext?
This only plays the first chunk, and it doesn't work correctly.
const context = new AudioContext()
const source = context.createBufferSource()
source.connect(context.destination)
const reader = response.body.getReader()
while (true) {
  const { done, value } = await reader.read()
  if (done) {
    break
  }
  const buffer = await context.decodeAudioData(value.buffer)
  source.buffer = buffer
  source.start(startTime)
}
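An AudioBufferSourceNode can only be started once, so re-assigning each decoded chunk to the same source node plays only the first one; the usual approach is a fresh AudioBufferSourceNode per decoded chunk, scheduled back to back. Below is a sketch of just the scheduling arithmetic (a hypothetical helper, not part of the Web Audio API), with the browser-only calls left as comments:

```javascript
// Hypothetical helper: given a base time and the durations of decoded
// chunks, compute the start time of each chunk so playback is gapless.
// In the browser, each chunk would get its own source node:
//   const src = context.createBufferSource()
//   src.buffer = decodedBuffer
//   src.connect(context.destination)
//   src.start(startTimes[i])
function scheduleTimes(baseTime, durations) {
  const starts = []
  let t = baseTime
  for (const d of durations) {
    starts.push(t) // this chunk starts when the previous one ends
    t += d
  }
  return starts
}
```

Note also that decodeAudioData can reject for byte chunks that don't begin on a frame boundary, so real code typically has to buffer bytes until a decodable unit is available.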

Related

await fetch(file_url) sometimes doesn't return the full file contents

I have the following JavaScript code to fetch and process the contents of a .csv file:
async function fetchCsv() {
  const response = await fetch("levels.csv");
  const reader = response.body.getReader();
  const result = await reader.read();
  const decoder = new TextDecoder("utf-8");
  const csv = await decoder.decode(result.value);
  return csv;
}
useEffect(() => {
  fetchCsv().then((csv) => {
    // process csv
    (...)
When running this code, 99% of the time the csv variable contains the correct contents of the file, but in rare cases it contains only a truncated part of the actual file.
What could be the reason, and how can I improve the code to handle that?
It's in a React App if that's relevant.
Extra info:
I have verified that when the problem occurs, the network response for the levels.csv file is a proper response (200, and the full 38 KB are returned).
What you get when calling response.body.getReader() is a ReadableStreamDefaultReader object.
Calling its .read() method will return a Promise that will resolve with either the full content of the response body, in case the request was honored fast enough and the body size isn't too big (apparently 256MB in Firefox), or with just one chunk of the response body.
This allows you to handle the response as a stream, before it's entirely fetched.
If you wish to process this stream as text, you could either use a TextDecoderStream, which finally got support in all major browsers:
const response = await fetch("levels.csv");
const textStream = response.body.pipeThrough(new TextDecoderStream());
// now you can handle each chunk as text from textStream.getReader();
// or pipe it in yet another TransformStream
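A runnable sketch of consuming such a piped stream, written against any byte ReadableStream so it can run outside the browser; in the real code, `response.body` would be passed in:

```javascript
// Collect a byte stream as text; each `value` read after the
// TextDecoderStream is an already-decoded string chunk.
async function readAllText(byteStream) {
  const reader = byteStream.pipeThrough(new TextDecoderStream()).getReader();
  let text = "";
  while (true) {
    const { value, done } = await reader.read();
    if (done) return text;
    text += value;
  }
}
```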
or in more old-school style, you could use the { stream: true } option of the TextDecoder#decode() method and handle each chunk one by one in there:
const response = await fetch("levels.csv");
const decoder = new TextDecoder();
const reader = response.body.getReader();
const csv_chunks = []; // collects each decoded chunk
while (true) {
  const { value, done } = await reader.read();
  if (value) {
    csv_chunks.push(decoder.decode(value, { stream: true }));
    // do something with all the chunks we have so far
  }
  if (done) {
    break;
  }
}
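The { stream: true } option matters because a network chunk boundary can split a multibyte UTF-8 sequence; the decoder then holds the incomplete bytes until the next chunk instead of emitting replacement characters. A self-contained sketch:

```javascript
// Decode an array of byte chunks as one continuous UTF-8 text stream.
function decodeChunks(chunks) {
  const decoder = new TextDecoder();
  let out = "";
  for (const chunk of chunks) {
    out += decoder.decode(chunk, { stream: true });
  }
  return out + decoder.decode(); // final call flushes any buffered bytes
}

// "é" is 0xC3 0xA9 in UTF-8; here it is split across two chunks.
const chunks = [new Uint8Array([0x61, 0xC3]), new Uint8Array([0xA9, 0x62])];
```

decodeChunks(chunks) yields "aéb", whereas decoding each chunk independently (without { stream: true }) would corrupt the split character.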
But maybe you don't want to handle this response as a stream at all, in which case it might very well be enough for you to ask the browser to first fetch the whole response body before it itself decodes it as text. For this, if you need to decode the text as UTF-8, you'd use the Response#text() method:
const response = await fetch("levels.csv");
if (!response.ok) { // don't forget to handle possible network errors
  throw new Error("NetworkError");
}
return response.text();
And if you need to handle another encoding, first consume the response as an ArrayBuffer, then decode it to text:
const response = await fetch("levels.csv");
if (!response.ok) { // don't forget to handle possible network errors
  throw new Error("NetworkError");
}
const buf = await response.arrayBuffer();
const decoder = new TextDecoder(encoding); // e.g. "windows-1252"
return decoder.decode(buf);

How do I stream an audio file to nodejs while it's still being recorded?

I am using MediaStream Recording API to record audio in the browser, like this (courtesy https://github.com/bryanjenningz/record-audio):
const recordAudio = () =>
  new Promise(async resolve => {
    // This wants to be secure. It will throw unless served from https:// or localhost.
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    const mediaRecorder = new MediaRecorder(stream);
    let audioChunks = [];
    mediaRecorder.addEventListener('dataavailable', event => {
      audioChunks.push(event.data);
      console.log("Got audioChunk!!", event.data.size, event.data.type);
      // mediaRecorder.requestData()
    });
    const start = () => {
      audioChunks = [];
      mediaRecorder.start(1000); // milliseconds per recorded chunk
    };
    const stop = () =>
      new Promise(resolve => {
        mediaRecorder.addEventListener('stop', () => {
          const audioBlob = new Blob(audioChunks, { type: 'audio/mpeg' });
          const audioUrl = URL.createObjectURL(audioBlob);
          const audio = new Audio(audioUrl);
          const play = () => audio.play();
          resolve({ audioChunks, audioBlob, audioUrl, play });
        });
        mediaRecorder.stop();
      });
    resolve({ start, stop });
  });
I would like to modify this code to start streaming to Node.js while it's still recording. I understand the header won't be complete until the recording finishes. I can either account for that on the Node.js side, or perhaps I can live with invalid headers, because I'll be feeding this into ffmpeg on Node.js anyway. How do I do this?
The trick is, when you start your recorder, to start it like this: mediaRecorder.start(timeSlice), where timeSlice is the number of milliseconds the browser waits before emitting a dataavailable event with a blob of data.
Then, in your event handler for dataavailable you call the server:
mediaRecorder.addEventListener('dataavailable', event => {
  myHTTPLibrary.post(event.data);
});
That's the general solution. It's not possible to insert an example here, because a code sandbox can't ask you to use your webcam, but I've created one here. It simply sends your data to Request Bin, where you can watch the data stream in.
There are some other things you'll need to think about if you want to stitch the video or audio back together. The blog post touches on that.
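One detail worth handling when stitching chunks back together: dataavailable events fire in order, but independent HTTP uploads don't necessarily finish in order. Below is a small sketch of a queue that serializes uploads so chunks reach the server in recording order; `postChunk` is a hypothetical uploader (in real code, a fetch POST), injected so the queue logic itself can be tested without a network.

```javascript
// Returns an enqueue function; each chunk's upload starts only after
// the previous chunk's upload has finished.
function makeChunkQueue(postChunk) {
  let tail = Promise.resolve();
  return (chunk) => {
    tail = tail.then(() => postChunk(chunk));
    return tail; // resolves when this chunk has been sent
  };
}

// Hypothetical browser usage:
//   const enqueue = makeChunkQueue((data) =>
//     fetch('/upload', { method: 'POST', body: data }));
//   mediaRecorder.addEventListener('dataavailable', (e) => enqueue(e.data));
```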

JS MediaRecorder API exports a non seekable WebM file

I am working on a video editor in which the video is rendered using the canvas, so I use the JS MediaRecorder API. I have run into an odd problem: because the MediaRecorder API is primarily designed for live streams, my exported WebM file doesn't report how long it is until playback reaches the end, which is kind of annoying.
This is the code I am using:
function exportVideo() {
  const stream = preview.captureStream();
  const dest = audioContext.createMediaStreamDestination();
  const sources = []
    .concat(...layers.map((layer) => layer.addAudioTracksTo(dest)))
    .filter((source) => source);
  // exporting doesn't work if there's no audio and it adds the tracks
  if (sources.length) {
    dest.stream.getAudioTracks().forEach((track) => stream.addTrack(track));
  }
  const recorder = new MediaRecorder(stream, {
    mimeType: usingExportType,
    videoBitsPerSecond: exportBitrate * 1000000,
  });
  let download = true;
  recorder.addEventListener("dataavailable", (e) => {
    const newVideo = document.createElement("video");
    exportedURL = URL.createObjectURL(e.data);
    if (download) {
      const saveLink = document.createElement("a");
      saveLink.href = exportedURL;
      saveLink.download = "video-export.webm";
      document.body.appendChild(saveLink);
      saveLink.click();
      document.body.removeChild(saveLink);
    }
  });
  previewTimeAt(0, false);
  return new Promise((res) => {
    recorder.start();
    audioContext.resume().then(() => play(res));
  }).then((successful) => {
    download = successful;
    recorder.stop();
    sources.forEach((source) => {
      source.disconnect(dest);
    });
  });
}
And if this is too vague, please tell me what is vague about it.
Thanks!
EDIT: Narrowed down the problem: this is a Chrome bug, see https://bugs.chromium.org/p/chromium/issues/detail?id=642012. I discovered a library called https://github.com/legokichi/ts-ebml that may be able to make the WebM seekable, but unfortunately this is a JavaScript project, and I ain't setting up TypeScript.
JS MediaRecorder API exports a non seekable WebM file
Yes, it does. It's in the nature of streaming.
In order to make that sort of stream seekable, you need to post-process it. There's an npm ebml library, pre-TypeScript, if you want to attempt it.

Read blob contents into an existing SharedArrayBuffer

I'm trying to find the most efficient way to read the contents of a Blob into an existing SharedArrayBuffer; the contents are consumed in a worker that is waiting for the buffer to be populated. In my case, I can guarantee that the SharedArrayBuffer is at least long enough to hold the entire contents of the Blob. The best approach I've come up with is:
// Assume 'blob' is the blob we are reading
// and 'buffer' is the SharedArrayBuffer.
const fr = new FileReader();
fr.addEventListener('load', e =>
new Uint8Array(buffer).set(new Uint8Array(e.target.result)));
fr.readAsArrayBuffer(blob);
This seems inefficient, especially if the blob being read is relatively large.
Blob is not a Transferable object. Also, there is no .readAsSharedArrayBuffer method available on FileReader.
However, if you only need to read a Blob from multiple workers simultaneously, I believe you can achieve this with URL.createObjectURL() and fetch, although I have not tested this with multiple workers:
// === main thread ===
let objectUrl = URL.createObjectURL(blob);
worker1.postMessage(objectUrl);
worker2.postMessage(objectUrl);
// === worker 1 & 2 ===
self.onmessage = msg => {
  fetch(msg.data)
    .then(res => res.blob())
    .then(blob => {
      doSomethingWithBlob(blob);
    });
};
Otherwise, as far as I can tell, there really isn't an efficient way to load data from a file into a SharedArrayBuffer.
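That said, the FileReader boilerplate can be avoided where Blob#arrayBuffer() is available (all current browsers, and Node 18+); the copy into shared memory is still unavoidable. A minimal sketch:

```javascript
// Copy a Blob's bytes into an existing SharedArrayBuffer.
// Assumes sharedBuffer is at least blob.size bytes long.
async function blobIntoSharedBuffer(blob, sharedBuffer) {
  const bytes = new Uint8Array(await blob.arrayBuffer());
  new Uint8Array(sharedBuffer).set(bytes); // copy into shared memory
  return bytes.length; // number of bytes written
}
```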
I'll also provide a method here for transferring chunks of a blob from the main thread to a single worker. For my use case, the files are too big to read the entire contents into a single array buffer anyway (shared or not), so I use .slice to deal in chunks. Something like this will let you deliver tons of data to a single worker in a stream-like fashion via multiple .postMessage calls, using the transferable ArrayBuffer:
// === main thread ===
const fr = new FileReader();
const chunkSize = 1024 * 1024; // 1 MiB per transferred chunk
let eof = false;
let nextBuffer = null;
let workerReady = true;
let read = 0;
function nextChunk() {
  let end = read + chunkSize;
  if (end >= file.size) { // Blobs expose .size, not .length
    end = file.size;
    eof = true;
  }
  const slice = file.slice(read, end);
  read = end;
  fr.readAsArrayBuffer(slice);
}
fr.onload = event => {
  const ab = event.target.result;
  if (workerReady) {
    worker.postMessage(ab, [ab]);
    workerReady = false;
    if (!eof) nextChunk();
  } else {
    nextBuffer = ab;
  }
};
// wait until the worker finished the last chunk,
// otherwise we'll flood the main thread's heap
worker.onmessage = msg => {
  if (nextBuffer) {
    worker.postMessage(nextBuffer, [nextBuffer]);
    nextBuffer = null;
  } else if (!eof && msg.data.ready) {
    nextChunk();
  }
};
nextChunk();
// === worker ===
self.onmessage = msg => {
  const ab = msg.data;
  // ... do stuff with data ...
  self.postMessage({ ready: true });
};
This will read a chunk of data into an ArrayBuffer in the main thread, transfer that to the worker, and then read the next chunk into memory while waiting for worker to process the previous chunk. This basically ensures that both threads stay busy the whole time.
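The same back-pressure idea can be written promise-based with Blob#slice and Blob#arrayBuffer, assuming those are available; the hypothetical `onChunk` callback stands in for the postMessage hand-off to the worker:

```javascript
// Read a Blob in fixed-size chunks, awaiting the consumer per chunk
// so we never hold more than one chunk's bytes at a time.
async function readInChunks(blob, chunkSize, onChunk) {
  for (let offset = 0; offset < blob.size; offset += chunkSize) {
    const slice = blob.slice(offset, Math.min(offset + chunkSize, blob.size));
    await onChunk(await slice.arrayBuffer()); // back-pressure point
  }
}
```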

How to retrieve a MediaStream from a Blob url?

It used to be possible to get a URL from a stream using window.URL.createObjectURL(), as in the code below.
navigator.getUserMedia({ video: true, audio: true },
  function (localMediaStream) {
    var video = document.querySelector('video');
    video.src = window.URL.createObjectURL(localMediaStream);
    video.onloadedmetadata = function (e) {
      // Do something with the video here.
    };
  },
  function (err) {
    console.log("The following error occurred: " + err);
  }
);
The problem is, now I have a blob URL like:
blob:http%3A//localhost%3A1560/f43bed15-da6c-4ff1-b73c-5640ed94e8ee
Is there a way to retrieve the MediaStream object from that?
Note:
URL.createObjectURL(MediaStream) has been deprecated.
Do not use it in code anymore; it will throw in any recent browser.
The premise of the question is still valid though.
There is no built in way to retrieve the original object a blob URL points to.
With Blobs, we can still fetch this blob URL and we'll get a copy of the original Blob.
const blob = new Blob(['hello']);
const url = URL.createObjectURL(blob);
fetch(url)
  .then(r => r.blob())
  .then(async (copy) => {
    console.log('same Blobs?', copy === blob);
    const blob_arr = new Uint8Array(await new Response(blob).arrayBuffer());
    const copy_arr = new Uint8Array(await new Response(copy).arrayBuffer());
    console.log("same content?", JSON.stringify(blob_arr) === JSON.stringify(copy_arr));
    console.log(JSON.stringify(copy_arr));
  });
With other objects though, this won't work...
const source = new MediaSource();
const url = URL.createObjectURL(source);
fetch(url)
  .then(r => r.blob())
  .then(console.log)
  .catch(console.error);
The only way then is to keep track of your original objects.
To do so, we can come up with simple wrappers around createObjectURL and revokeObjectURL to update a dictionary of objects accessible by URL:
(() => {
  // overrides URL methods to be able to retrieve the original blobs later on
  const old_create = URL.createObjectURL;
  const old_revoke = URL.revokeObjectURL;
  Object.defineProperty(URL, 'createObjectURL', {
    get: () => storeAndCreate
  });
  Object.defineProperty(URL, 'revokeObjectURL', {
    get: () => forgetAndRevoke
  });
  Object.defineProperty(URL, 'getFromObjectURL', {
    get: () => getBlob
  });
  const dict = {};
  function storeAndCreate(blob) {
    const url = old_create(blob); // let it throw if it has to
    dict[url] = blob;
    return url;
  }
  function forgetAndRevoke(url) {
    old_revoke(url);
    // some checks, just because it's what the question title asks for,
    // and to avoid deleting bad things
    try {
      if (new URL(url).protocol === 'blob:')
        delete dict[url];
    } catch (e) {} // avoided deleting some bad thing ;)
  }
  function getBlob(url) {
    return dict[url];
  }
})();
// a few example uses
// first a simple Blob
test(new Blob(['foo bar']));
// A more complicated MediaSource
test(new MediaSource());

function test(original) {
  const url = URL.createObjectURL(original);
  const retrieved = URL.getFromObjectURL(url);
  console.log('retrieved: ', retrieved);
  console.log('is same object: ', retrieved === original);
  URL.revokeObjectURL(url);
}
In case you are using Angular 2, you can use the DomSanitizer provided in the platform-browser package:
import { DomSanitizer } from '@angular/platform-browser';

constructor(private sanitizer: DomSanitizer) {}
and then use your stream like the following:
//your code comes here...
video.src = this.sanitizer.bypassSecurityTrustUrl(window.URL.createObjectURL(stream));
video.src is NOT video.srcObject
And yes, they will conflict ;)
video.src takes a source URL.
video.srcObject takes a source OBJECT (currently, as of 2019, only MediaStream is safely supported; maybe in the future you could put a Blob directly here, but not now).
So it depends on what you really want to do:
A) Display what is currently being recorded
You must have MediaStream object available (which you do) and just put it into video.srcObject
navigator.getUserMedia({ video: true, audio: true }, function (localMediaStream) {
  var video = document.querySelector('video');
  video.src = ''; // just to be sure src does not conflict with us
  video.srcObject = localMediaStream;
});
B) Display existing video / just recorded video
video.srcObject = null; // make sure srcObject is empty and does not overlay our src
video.src = window.URL.createObjectURL(THE_BLOB_OBJECT);
THE_BLOB_OBJECT is a Blob you either already have (created through the File API) or, if you have some kind of recorder (let's assume in a recorder variable), usually there is a getBlob() method or something similar available, like recorder.getBlob(). I strongly recommend you use an existing recorder library for this, but to be complete, there is an official MediaRecorder API - https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder
So you see you've just combined 2 things together, you just need to separate them and make sure they don't conflict :)