HTML5 Video: Streaming Video with Blob URLs - javascript

I have an array of Blobs (binary data, really -- I can express it however is most efficient. I'm using Blobs for now but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like so:
var arrayOfBlobs = [];
setInterval(function() {
arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?

Solution
After some significant Googling, I managed to find the missing piece of the puzzle: MediaSource.
Effectively the process goes like this:
Create a MediaSource
Create an object URL from the MediaSource
Set the video's src to the object URL
On the sourceopen event, create a SourceBuffer
Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. They have to be declared, and must be exact, or it won't work.
You can only append one chunk of video data to the SourceBuffer at a time, and you can't append a second chunk until the first one has finished (asynchronously) processing.
If you append too much data to the SourceBuffer without calling .remove(), you'll eventually run out of RAM and the video will stop playing. I hit this limit after around an hour on my laptop.
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
<video id="video"></video>
<script>
// As before, I'm regularly grabbing blobs of video data
// The implementation of "nextChunk" could be various things:
// - reading from a MediaRecorder
// - reading from an XMLHttpRequest
// - reading from a local webcam
// - generating the files on the fly in JavaScript
// - etc
var arrayOfBlobs = [];
setInterval(function() {
arrayOfBlobs.push(nextChunk());
// NEW: Try to flush our queue of video data to the video element
appendToSourceBuffer();
}, 1000);
// 1. Create a `MediaSource`
var mediaSource = new MediaSource();
// 2. Create an object URL from the `MediaSource`
var url = URL.createObjectURL(mediaSource);
// 3. Set the video's `src` to the object URL
var video = document.getElementById("video");
video.src = url;
// 4. On the `sourceopen` event, create a `SourceBuffer`
var sourceBuffer = null;
mediaSource.addEventListener("sourceopen", function()
{
// NOTE: Browsers are VERY picky about the codec being EXACTLY
// right here. Make sure you know which codecs you're using!
sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
// If we requested any video data prior to setting up the SourceBuffer,
// we want to make sure we only append one blob at a time
sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});
// 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
function appendToSourceBuffer()
{
if (
mediaSource.readyState === "open" &&
sourceBuffer &&
sourceBuffer.updating === false &&
arrayOfBlobs.length > 0
)
{
// NOTE: appendBuffer() takes an ArrayBuffer or TypedArray, not a Blob,
// so nextChunk() should produce one of those (or convert each Blob
// first, e.g. via blob.arrayBuffer())
sourceBuffer.appendBuffer(arrayOfBlobs.shift());
}
// Limit the total buffer size to 20 minutes
// This way we don't run out of RAM
// (remove() also throws while an append is in flight, so wait until idle)
if (
sourceBuffer &&
sourceBuffer.updating === false &&
video.buffered.length &&
video.buffered.end(0) - video.buffered.start(0) > 1200
)
{
sourceBuffer.remove(0, video.buffered.end(0) - 1200);
}
}
</script>
</body>
</html>
As an added bonus, this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek simply by setting video.currentTime = ...).
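That DVR-style seeking can be sketched with a small helper that clamps a requested seek time into the currently buffered window, so you never seek outside the ~20 minutes of retained data. The clampToBuffered helper is hypothetical, not part of any browser API:

```javascript
// Hypothetical helper: clamp a requested seek time into the buffered
// window [bufferedStart, bufferedEnd] so the seek always lands on data
// that is still in the SourceBuffer
function clampToBuffered(requested, bufferedStart, bufferedEnd) {
  return Math.min(Math.max(requested, bufferedStart), bufferedEnd);
}

// In the browser, using the "video" element from the example above:
// var start = video.buffered.start(0);
// var end = video.buffered.end(0);
// video.currentTime = clampToBuffered(end - 300, start, end); // jump back 5 min
```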

Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the sourceopen event handler to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timestamp is loaded.
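A minimal sketch of that setting (the configureForLiveAppends wrapper is just for illustration; in the browser the SourceBuffer comes from mediaSource.addSourceBuffer):

```javascript
// Sketch: configure a SourceBuffer for arrival-order appends.
// "sequence" ignores the timestamps embedded in each chunk and plays
// chunks back in the order they were appended; the default "segments"
// mode waits for the chunk matching the next expected timestamp.
function configureForLiveAppends(sourceBuffer) {
  sourceBuffer.mode = "sequence";
  return sourceBuffer;
}

// In the browser (assumed wiring):
// mediaSource.addEventListener("sourceopen", function () {
//   var sb = mediaSource.addSourceBuffer('video/webm; codecs="opus,vp8"');
//   configureForLiveAppends(sb);
// });
```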
Additionally, make sure that you are not sending any packets with data.size === 0, and clear the queue on the broadcasting side so no backlog builds up, unless you want to record it as one entire video, in which case just make sure the broadcast video is small enough and your internet speed is fast. The smaller the size and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. a video call.
For iOS the broadcast needs to be made from an iOS/macOS application, and be in mp4 format. The video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.

Related

How to cache a webaudio object properly?

I'm developing a game using JavaScript and other web technologies. In it, there's a game mode that is basically a tower defense, in which multiple objects may need to use the same audio file (.ogg) at the same time. Loading the file and creating a new audio object for each one of them lags it too much, even if I attempt to stream it instead of doing a simple sync read. And if I create and save one audio object in a variable to use multiple times, then each time it is playing and a new request to play that audio arrives, the one that was playing stops to allow the new one to play (so, with enough of those, nothing plays at all).
With those issues, I decided to make copies of the audio object each time it was going to be played, but that is not only slow, it also creates a minor memory leak (at least the way I did it).
How can I properly cache audio for re-use? Consider that I'm pretty sure I'll need a new one each time, because each sound has a position, and thus each of them will play differently, based on the player's position relative to the object that is playing the audio.
You tagged your question with web-audio-api, but from the body of this question, it seems you are using an HTMLMediaElement <audio> instead of the Web Audio API.
So I'll invite you to transition to the Web Audio API.
There you'll be able to decode your audio file once, keep the decoded data just once as an AudioBuffer, and create many readers that all hook into that one and only AudioBuffer, without eating any more memory.
const btn = document.querySelector("button")
const context = new AudioContext();
// a GainNode to control the output volume of our audio
const volumeNode = context.createGain();
volumeNode.gain.value = 0.5; // from 0 to 1
volumeNode.connect(context.destination);
fetch("https://dl.dropboxusercontent.com/s/agepbh2agnduknz/camera.mp3")
// get the resource as an ArrayBuffer
.then((resp) => resp.arrayBuffer())
// decode the Audio data from this resource
.then((buffer) => context.decodeAudioData(buffer))
// now we have our AudioBuffer object, ready to be played
.then((audioBuffer) => {
btn.onclick = (evt) => {
// allowing an AudioContext to make noise
// must be required from an user-gesture
if (context.state === "suspended") {
context.resume();
}
// a very light player object
const source = context.createBufferSource();
// a simple pointer to the big AudioBuffer (no copy)
source.buffer = audioBuffer;
// connect to our volume node, itself connected to audio output
source.connect(volumeNode);
// start playing now
source.start(0);
};
// now you can spam the button!
btn.disabled = false;
})
.catch(console.error);
<button disabled>play</button>

Javascript - imageCapture.takePhoto() function to take pictures

I am building a web application for experimental purposes. The aim is to capture ~15-20 frames per second from the webcam and send them to the server. Once a frame is captured, it is converted to base64 and added to an array. After a certain time, the array is sent back to the server. Currently I am using imageCapture.takePhoto() to achieve this functionality. I get a blob as a result, which is then changed to base64. The application runs for ~5 seconds, and during this time frames are captured and sent to the server.
What are the more efficient ways to capture the frames through webcam to achieve this?
You can capture still images directly from the <video> element used to preview the stream from .getUserMedia(). You set up that preview, of course, by doing this sort of thing (pseudocode).
const stream = await navigator.mediaDevices.getUserMedia(options)
const videoElement = document.querySelector('video#whateverId')
videoElement.srcObject = stream
videoElement.play()
Next, make yourself a canvas object and a context for it. It doesn't have to be visible.
const scratchCanvas = document.createElement('canvas')
scratchCanvas.width = videoElement.videoWidth
scratchCanvas.height = videoElement.videoHeight
const scratchContext = scratchCanvas.getContext('2d')
Now you can make yourself a function like this.
function stillCapture(video, canvas, context) {
context.drawImage( video, 0, 0, video.videoWidth, video.videoHeight)
canvas.toBlob(
function (jpegBlob) {
/* do something useful with the Blob containing jpeg */
}, 'image/jpeg')
}
A Blob containing a jpeg version of a still capture shows up in the callback. Do with it whatever you need to do.
Then, invoke that function every so often. For example, to get approximately 15fps, do this.
const howOften = 1000.0 / 15.0
setInterval (stillCapture, howOften, videoElement, scratchCanvas, scratchContext)
All this saves you the extra work of using .takePhoto().

Is there a way to play fragmented mp4 at random chunk?

Say there is a video rendered into fragmented mp4, consisting of a number of chunks/fragments. The question is: after the init segment has been loaded into the MediaSource buffer, is there a way to play a random fragment?
A small study of the format's specifications gave little understanding of the problem. Fragments seem to have a kind of order ID hardcoded in them. It's a fairly reasonable idea to include it in the file in case of unreliable connections and asynchronous fetching while streaming content, but is there a way to parse a chunk and change its ID using JavaScript?
The code below is just playing 2 minute video split into 12 fragments based on user's time and is supposed to be able to start at any chunk (not only the first) and then to repeat.
let mediaSource = new MediaSource()
document.getElementById('video').src = URL.createObjectURL(mediaSource)
mediaSource.addEventListener('sourceopen', () => {
let buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')
let loadToBuffer = url => {
let xhr = new XMLHttpRequest()
xhr.responseType = 'arraybuffer'
xhr.open('GET', url, true)
xhr.addEventListener('loadend', () => buffer.appendBuffer(new Uint8Array(xhr.response)))
xhr.send()
}
loadToBuffer('video/init.mp4')
setInterval(() => loadToBuffer('video/video' + (Math.floor(new Date().getTime() / 1000 / 10) % 12) + '.m4s'), 10 * 1000)
})
When you load fragments in a sourceBuffer, those fragments include presentation time stamps (PTS), which put them in the correct playback order in the buffer.
You can either modify the fragments themselves, for which you have to parse the atoms and change the PTS (and possibly other) values, or change the video element's currentTime property, so that playback jumps to the point in the timeline where the fragment was actually buffered.
You can inspect the buffered property on the video element object to check the range of time that has been loaded.
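As a sketch of that last step, you can flatten the TimeRanges from video.buffered into plain [start, end] pairs and check whether a target time is playable before seeking. The findBufferedRange helper is hypothetical, not a browser API:

```javascript
// Hypothetical helper: given [start, end] pairs read from video.buffered,
// return the range containing "time", or null if that time isn't buffered
function findBufferedRange(ranges, time) {
  for (var i = 0; i < ranges.length; i++) {
    if (time >= ranges[i][0] && time <= ranges[i][1]) return ranges[i];
  }
  return null;
}

// In the browser you'd build "ranges" from the TimeRanges object:
// var ranges = [];
// for (var i = 0; i < video.buffered.length; i++) {
//   ranges.push([video.buffered.start(i), video.buffered.end(i)]);
// }
// if (findBufferedRange(ranges, 42)) video.currentTime = 42;
```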

Updating objectURL in JavaScript

Suppose I am receiving data from a video chat and I need to add the data to the video element in the HTML page.
So here is the code:
var payload = []; // This array keeps updating, since it is getting the data from the network using media stream.
var remoteVideo = document.getElementById('remoteVideo');
var buffer = new Blob([], { type: "video/x-matroska;codecs=avc1,opus" });
var url = URL.createObjectURL(buffer);
remoteVideo.src = url;
Now, I am getting data into the buffer. How do I update the URL I have created, instead of creating a new one each time, to view the video?
I think you might not need to update the URL at all. Using MediaSource, the process goes like this:
Create a MediaSource object.
Create an object URL from the MediaSource using createObjectURL.
Set the video's src to the object URL.
Listen to the sourceopen event and, when it occurs, create a SourceBuffer instance.
Use SourceBuffer.appendBuffer() to add all of your chunks to the video.
But pay close attention to MediaSource limitations and considerations.
EDIT:
I found this Answer which explains the process described above more precisely and also points out the considerations you should take, and also an example.
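The one-chunk-at-a-time constraint implied by step 5 can be sketched as a small queue. The makeAppender helper below is hypothetical, and "buffer" stands in for a real SourceBuffer (same updating flag and appendBuffer method):

```javascript
// Sketch: chunks wait in a queue, and only one is handed to
// appendBuffer() at a time, while the buffer is idle
function makeAppender(buffer) {
  var queue = [];
  function flush() {
    if (!buffer.updating && queue.length) {
      buffer.appendBuffer(queue.shift());
    }
  }
  return {
    push: function (chunk) { queue.push(chunk); flush(); },
    onUpdateEnd: flush, // wire this to the SourceBuffer's "updateend" event
    pending: function () { return queue.length; }
  };
}

// In the browser (assumed wiring):
// var appender = makeAppender(sourceBuffer);
// sourceBuffer.addEventListener("updateend", appender.onUpdateEnd);
// appender.push(chunkFromNetwork);
```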

Playing audio broken into multiple files in webpage

I want to play an audio-book in my web page. The audio book is a .zip file which contains multiple .mp3 files, one for each chapter of the book. The run time of all the files is several hours, and their cumulative size is 60MB. The .zip is stored server-side (Express.js).
How can I play each file in succession in the client (in an <audio> element for instance), so that the audio-book plays smoothly, as if 1 file?
Do I need to use a MediaStream object? If so, how?
I'd take a look at this answer on another Stack Overflow question however I have made some modifications in order to match your question:
var audioFileURLs= [];
function preloadAudio(url) {
var audio = new Audio();
// once this file loads, it will call loadedAudio()
// the file will be kept by the browser as cache
audio.addEventListener('canplaythrough', loadedAudio, false);
audio.src = url;
}
var loaded = 0;
function loadedAudio() {
// this will be called every time an audio file is loaded
// we keep track of the loaded files vs the requested files
loaded++;
if (loaded == audioFileURLs.length){
// all have loaded
init();
}
}
var player = document.getElementById('player');
function play(index) {
player.src = audioFileURLs[index];
player.play();
}
function init() {
// do your stuff here, audio has been loaded
// for example, play all files one after the other
var i = 0;
// once the player ends, play the next one
player.onended = function() {
i++;
if (i >= audioFileURLs.length) {
// end
return;
}
play(i);
};
// play the first file
play(i);
}
// call node/express server to get a list of links we can hit to retrieve each audio file
fetch('/getAudioUrls/#BookNameOrIdHere#')
.then(r => r.json())
.then(arrayOfURLs => {
audioFileURLs = arrayOfURLs
arrayOfURLs.map(url => preloadAudio(url))
})
And then just have an audio element on the screen with the id of "player" like <audio id="player"></audio>
With this answer though, the arrayOfURLs array must contain URLs to an API on your server that will open the zip file and return the specified mp3 data. You may also just want to take this answer as a general reference, and not a complete solution because there is optimization to be done. You should probably only load the first audio file at first, and 5 minutes or so before the first file ends you may want to start pre-loading the next and then repeat this process for the entire thing... That all will be up to you but this should hopefully put you on your feet.
You may also run into an issue with the audio element, though, because it will only show the length of the current audio segment it is on, and not the full length of the audiobook. I would assume this zip file has the book separated by chapter, correct? If so, you could create a chapter selector that allows you to jump to a specific chapter via its getAudioUrls URL.
I hope this helps!
One other note for you... reading your comment on a potential answer down below, you could combine all the audio files into one using some sort of node module (audioconcat is one I found after a quick google search) and return that one file to the client. However, I would not personally take this route because the entire audiobook will be in the server's memory while it combines them, and until it returns it to the client. This could cause some memory issues down the road, so I would avoid it if I could. However, I will admit that this option could be potentially nice because the full length of the audiobook will display in the audio elements timeline.
The best option perhaps is to store the book's full length and chapter lengths in a details.json file in the zip file and send that to the client in the first API call, along with the URLs to each audio file. This would enable you to build a nice UI.
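A sketch of what that details.json might look like, plus a helper that maps a position in the whole book to a chapter and an offset within it. The field names (totalSeconds, chapters, seconds, url) and the locate helper are assumptions for illustration, not an existing format:

```javascript
// Hypothetical details.json shape for the audiobook
var details = {
  totalSeconds: 7200,
  chapters: [
    { title: "Chapter 1", seconds: 1800, url: "/audio/1.mp3" },
    { title: "Chapter 2", seconds: 2700, url: "/audio/2.mp3" },
    { title: "Chapter 3", seconds: 2700, url: "/audio/3.mp3" }
  ]
};

// Map a global book position (in seconds) to a chapter index and the
// offset within that chapter; returns null past the end of the book
function locate(details, globalSeconds) {
  var remaining = globalSeconds;
  for (var i = 0; i < details.chapters.length; i++) {
    if (remaining < details.chapters[i].seconds) {
      return { chapter: i, offset: remaining };
    }
    remaining -= details.chapters[i].seconds;
  }
  return null;
}

// In the player you could then do something like:
// var pos = locate(details, seekBar.value);
// if (pos) { play(pos.chapter); player.currentTime = pos.offset; }
```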
The only options I can think of are to either use a JavaScript mp3 decoder (or a C decoder compiled to asm.js/wasm) together with the audio APIs, or to wrap the mp3 in an mp4 using something like mux.js and use Media Source Extensions for playback.
Maybe this will help you:
<audio controls="controls">
<source src="track.ogg" type="audio/ogg" />
<source src="track.mp3" type="audio/mpeg" />
Your browser does not support the audio element.
</audio>
