Say there is a video rendered into fragmented mp4, consisting of a number of chunks/fragments. The question is: after the init segment has been loaded into the MediaSource buffer, is there a way to play an arbitrary fragment?
Some research into the format's specifications gave little insight into the problem. Fragments seem to have a kind of ordering ID hardcoded into them. Including it in the file is reasonable for unreliable connections and asynchronous fetching while streaming content, but is there a way to parse a chunk and change its ID using JavaScript?
The code below simply plays a 2-minute video split into 12 fragments based on the user's clock, and is supposed to be able to start at any chunk (not only the first) and then repeat.
let mediaSource = new MediaSource()
document.getElementById('video').src = URL.createObjectURL(mediaSource)

mediaSource.addEventListener('sourceopen', () => {
  let buffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')

  // Fetch a segment and append it to the SourceBuffer
  let loadToBuffer = url => {
    let xhr = new XMLHttpRequest()
    xhr.open('GET', url, true)
    xhr.responseType = 'arraybuffer'
    xhr.addEventListener('loadend', () => buffer.appendBuffer(new Uint8Array(xhr.response)))
    xhr.send()
  }

  // Load the init segment, then pick a fragment from the wall clock every 10 seconds
  loadToBuffer('video/init.mp4')
  setInterval(() => loadToBuffer('video/video' + (Math.floor(Date.now() / 1000 / 10) % 12) + '.m4s'), 10 * 1000)
})
When you load fragments into a SourceBuffer, those fragments include presentation timestamps (PTS), which put them in the correct playback order in the buffer.
You can either modify the fragments themselves, for which you have to parse the atoms and change the PTS (and possibly other) values, or change the video element's currentTime property so that playback jumps to the range that was actually buffered.
You can inspect the buffered property of the video element to check the range of time that has been loaded.
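For the second approach, a minimal sketch (assuming the video element and buffer setup from the question) that logs the buffered ranges and jumps playback to the start of whatever ended up buffered:
const video = document.getElementById('video')

video.addEventListener('progress', () => {
  // buffered is a TimeRanges object; each entry is a contiguous buffered span
  for (let i = 0; i < video.buffered.length; i++) {
    console.log('range', i, video.buffered.start(i), '-', video.buffered.end(i))
  }
  // Jump to the start of the first buffered range if we're outside it
  if (video.buffered.length && video.currentTime < video.buffered.start(0)) {
    video.currentTime = video.buffered.start(0)
  }
})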
Related
I am building a web application for experimental purposes. The aim is to capture ~15-20 frames per second from the webcam and send them to the server. Once a frame is captured, it is converted to base64 and added to an array. After a certain time, the array is sent to the server. Currently I am using imageCapture.takePhoto() to achieve this functionality. I get a blob as a result, which is then converted to base64. The application runs for ~5 seconds, during which frames are captured and sent to the server.
What are more efficient ways to capture frames from the webcam to achieve this?
You can capture still images directly from the <video> element used to preview the stream from .getUserMedia(). You set up that preview, of course, by doing this sort of thing (pseudocode).
const stream = await navigator.mediaDevices.getUserMedia({ video: true })
const videoElement = document.querySelector('video#whateverId')
videoElement.srcObject = stream
await videoElement.play()
Next, make yourself a canvas object and a context for it. It doesn't have to be visible.
const scratchCanvas = document.createElement('canvas')
scratchCanvas.width = videoElement.videoWidth
scratchCanvas.height = videoElement.videoHeight
const scratchContext = scratchCanvas.getContext('2d')
Now you can make yourself a function like this.
function stillCapture(video, canvas, context) {
  context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
  canvas.toBlob(
    function (jpegBlob) {
      /* do something useful with the Blob containing jpeg */
    }, 'image/jpeg')
}
A Blob containing a jpeg version of a still capture shows up in the callback. Do with it whatever you need to do.
Then, invoke that function every so often. For example, to get approximately 15fps, do this.
const howOften = 1000.0 / 15.0
setInterval(stillCapture, howOften, videoElement, scratchCanvas, scratchContext)
All this saves you the extra work of using .takePhoto().
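Since the question converts each frame to base64 before queueing it, a minimal sketch of that step might look like this (the frames array stands in for the question's own array, and jpegBlob is the Blob from the callback above):
// Convert a Blob to a base64 data URL; FileReader works asynchronously
function blobToBase64(blob) {
  return new Promise((resolve, reject) => {
    const reader = new FileReader()
    reader.onloadend = () => resolve(reader.result) // "data:image/jpeg;base64,..."
    reader.onerror = reject
    reader.readAsDataURL(blob)
  })
}

// Usage inside the toBlob callback:
// blobToBase64(jpegBlob).then(b64 => frames.push(b64))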
I have an HTML5 video that is rather large. I'm also using Chrome. The video element has the loop attribute, but each time the video "loops", the browser re-downloads the video file. I have set Cache-Control "max-age=15768000, private"; however, this does not prevent extra downloads of the identical file. I am using Amazon S3 to host the file. The S3 server also responds with the Accept-Ranges header, which causes several hundred partial downloads of the file to be requested with the 206 HTTP response code.
Here is my video tag:
<video autoplay="" loop="" class="asset current">
<source src="https://mybucket.s3.us-east-2.amazonaws.com/myvideo.mp4">
</video>
UPDATE:
It seems that the best solution is to prevent the Accept-Ranges header from being sent with the original response, and instead use a 200 HTTP response code. How can this be achieved through an .htaccess file so that the video is fully cached?
Thanks in advance.
I don't know for sure what the real issue is that you are facing.
It could be that Chrome has a max-size limit on what it will cache, and if that is the case, then not using Range-Requests wouldn't solve anything.
Another possible explanation is that caching media is not really a simple task.
Without seeing your file it's hard to tell for sure which case you are in, but you have to understand that to play a media file, the browser doesn't need to fetch the whole file.
For instance, you can very well play a video file in an <audio> element; since the video stream won't be used, a browser could very well omit it completely and download only the audio stream. Not sure if any does, but they could. Most media formats physically separate the audio and video streams in the file, and their byte positions are marked in the metadata.
Browsers could certainly cache the Range-Requests they perform, but I think it's still quite rare that they do.
But as tempting as it might be to disable Range-Requests, you have to know that some browsers (Safari) will not play your media if your server doesn't allow Range-Requests.
So even then, it's probably not what you want.
The first thing you may want to try is to optimize your video for web usage. Instead of mp4, serve a webm file. These will generally take less space for the same quality, and maybe you'll avoid the max-size limitation.
If the resulting file is still too big, then a dirty solution would be to use a MediaSource so that the file is kept in memory and you need to fetch it only once.
In the following example, the file will be fetched entirely only once, in chunks of 1MB, streamed by the MediaSource as it's being fetched; after that, only the data in memory will be used for looping playback:
document.getElementById('streamVid').onclick = e => (async () => {
  const url = 'https://upload.wikimedia.org/wikipedia/commons/transcoded/2/22/Volcano_Lava_Sample.webm/Volcano_Lava_Sample.webm.360p.webm';
  // you must know the mimeType of your video beforehand
  const type = 'video/webm; codecs="vp8, vorbis"';
  if (!MediaSource.isTypeSupported(type)) {
    throw 'Unsupported';
  }
  const source = new MediaSource();
  source.addEventListener('sourceopen', sourceOpen);
  document.getElementById('out').src = URL.createObjectURL(source);

  // async generator Range-Fetcher
  async function* fetchRanges(url, chunk_size = 1024 * 1024) {
    let chunk = new ArrayBuffer(1);
    let cursor = 0;
    while (chunk.byteLength) {
      const resp = await fetch(url, {
        method: "get",
        headers: { "Range": "bytes=" + cursor + "-" + (cursor += chunk_size) }
      });
      chunk = resp.ok && await resp.arrayBuffer();
      cursor++; // add one byte for next iteration, Ranges are inclusive
      yield chunk;
    }
  }

  // set up our MediaSource
  async function sourceOpen() {
    const buffer = source.addSourceBuffer(type);
    buffer.mode = "sequence";
    // looking forward to appendAsync...
    const appendBuffer = (chunk) => {
      return new Promise(resolve => {
        buffer.addEventListener('update', resolve, { once: true });
        buffer.appendBuffer(chunk);
      });
    };
    // while our RangeFetcher is running
    for await (const chunk of fetchRanges(url)) {
      if (chunk) { // append to our MediaSource
        await appendBuffer(chunk);
      }
      else { // when done
        source.endOfStream();
      }
    }
  }
})().catch(console.error);
<button id="streamVid">stream video</button>
<video id="out" controls muted autoplay loop></video>
Google Chrome has a limit on the size of its file cache. In this case my previous answer would not work. You should use something like file-compressor; this resource may be able to compress the file enough to make it cache-eligible. The browser's cache size can be set manually, but this is not doable if the end user has not configured their cache to provide the space required to hold the long video.
A possibility for people who got here: the dev tools have a "Disable cache" option under the Network tab. When enabled (meaning the cache is disabled), the browser doesn't cache the videos, hence it needs to download them again.
I have an array of Blobs (binary data, really -- I can express it however is most efficient. I'm using Blobs for now, but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like this:
var arrayOfBlobs = [];

setInterval(function() {
  arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 <video> element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can somehow trivially concatenate all of the Blobs currently in my array to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
1. Create a MediaSource
2. Create an object URL from the MediaSource
3. Set the video's src to the object URL
4. On the sourceopen event, create a SourceBuffer
5. Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
- The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work
- You can only append one blob of video data to the SourceBuffer at a time, and you can't append a second blob until the first one has finished (asynchronously) processing
- If you append too much data to the SourceBuffer without calling .remove(), then you'll eventually run out of RAM and the video will stop playing. I hit this limit around 1 hour on my laptop
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
  <video id="video"></video>
  <script>
    // As before, I'm regularly grabbing blobs of video data
    // The implementation of "nextChunk" could be various things:
    //   - reading from a MediaRecorder
    //   - reading from an XMLHttpRequest
    //   - reading from a local webcam
    //   - generating the files on the fly in JavaScript
    //   - etc
    var arrayOfBlobs = [];

    setInterval(function() {
      arrayOfBlobs.push(nextChunk());
      // NEW: Try to flush our queue of video data to the video element
      appendToSourceBuffer();
    }, 1000);

    // 1. Create a `MediaSource`
    var mediaSource = new MediaSource();

    // 2. Create an object URL from the `MediaSource`
    var url = URL.createObjectURL(mediaSource);

    // 3. Set the video's `src` to the object URL
    var video = document.getElementById("video");
    video.src = url;

    // 4. On the `sourceopen` event, create a `SourceBuffer`
    var sourceBuffer = null;
    mediaSource.addEventListener("sourceopen", function()
    {
      // NOTE: Browsers are VERY picky about the codec being EXACTLY
      // right here. Make sure you know which codecs you're using!
      sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");

      // If we requested any video data prior to setting up the SourceBuffer,
      // we want to make sure we only append one chunk at a time
      sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
    });

    // 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
    function appendToSourceBuffer()
    {
      if (
        mediaSource.readyState === "open" &&
        sourceBuffer &&
        sourceBuffer.updating === false &&
        arrayOfBlobs.length > 0
      )
      {
        // NOTE: appendBuffer() takes an ArrayBuffer or typed array,
        // so each chunk must be raw bytes rather than a Blob object
        sourceBuffer.appendBuffer(arrayOfBlobs.shift());
      }

      // Limit the total buffer size to 20 minutes
      // This way we don't run out of RAM
      // (only call remove() while no append is in flight)
      if (
        video.buffered.length &&
        video.buffered.end(0) - video.buffered.start(0) > 1200 &&
        sourceBuffer.updating === false
      )
      {
        sourceBuffer.remove(0, video.buffered.end(0) - 1200)
      }
    }
  </script>
</body>
</html>
As an added bonus, this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek simply by setting video.currentTime = ...)
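For instance, a small sketch of such a seek (the helper name is mine, not part of the answer's code):
// Jump back into the retained buffer window, clamped to what is actually buffered
function seekBack(video, seconds) {
  if (video.buffered.length) {
    video.currentTime = Math.max(video.buffered.start(0), video.currentTime - seconds);
  }
}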
Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler to ensure the data is appended in the order it is received. The default value is segments, which buffers until the next 'expected' timeframe is loaded; a sketch of where the assignment goes follows below.
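A minimal sketch, reusing the variable names assumed from the previous answer:
mediaSource.addEventListener("sourceopen", function () {
  sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
  // 'sequence' appends chunks back-to-back in arrival order,
  // instead of placing them by their embedded timestamps ('segments')
  sourceBuffer.mode = "sequence";
  sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});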
Additionally, make sure that you are not sending any packets with data.size === 0, and make sure that no backlog builds up by clearing the queue on the broadcasting side, unless you want to record it as an entire video, in which case just make sure the size of the broadcast video is small enough and your internet speed is fast. The smaller the chunks and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. for a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. The video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
I am trying to buffer MP3 songs using Node.js and socket.io in real time. I basically divide the MP3 into segments of bytes and send them over to the client side, where the Web Audio API receives them, decodes them and starts to play them. The issue here is that the sound does not play continuously; there is something like a 0.5 second gap between every buffered segment. How can I solve this problem?
// buffer is 2 seconds of decoded audio ready to be played
// the function is called when a new buffer is received
function stream(buffer)
{
  // creates a new buffer source and connects it to the audio context
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.loop = false;
  // schedules it and updates the time
  source.start(time + context.currentTime);
  time += buffer.duration; // time is global variable initially set to zero
}
The part where stream is called:
// where bufferQ is an array of decoded MP3 data
// the stream function is called after every 3 segments that are received
// the Web Audio API plays it with gaps between the sound
if (bufferQ.length == 3)
{
  for (var i = 0, n = bufferQ.length; i < n; i++)
  {
    stream(bufferQ.splice(0, 1)[0]);
  }
}
Should I use a different API other than the Web Audio API, or is there a way to schedule my buffers so that they are played continuously?
context.currentTime will vary depending on when it is evaluated, and every read has an implicit inaccuracy due to being rounded to the nearest 2ms or so (see Firefox BaseAudioContext/currentTime#Reduced time precision). Consider:
function stream(buffer)
{
  ...
  source.start(time + context.currentTime);
  time += buffer.duration; // time is global variable initially set to zero
Calling source.start(time + context.currentTime) for every block of PCM data will always start the playback of that block at whatever the currentTime is now (which is not necessarily related to the playback time) rounded to 2ms, plus the time offset.
For playing back-to-back PCM chunks, read currentTime once at the beginning of the stream, then add each duration to it after scheduling the playback. For example, PCMPlayer does:
PCMPlayer.prototype.flush = function() {
  ...
  if (this.startTime < this.audioCtx.currentTime) {
    this.startTime = this.audioCtx.currentTime;
  }
  ...
  bufferSource.start(this.startTime);
  this.startTime += audioBuffer.duration;
};
Note startTime is only reset when it represents a time in the past - for continuous buffered streaming it is not reset, as it will be a value some time in the future. In each call to flush, startTime is used to schedule playback and is only increased by each PCM data duration; it does not depend on currentTime.
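Translated back to the question's stream() function, a minimal corrected sketch (reusing the question's global context and time variables) would read currentTime only when the schedule has fallen behind:
var time = 0; // next scheduled start, in AudioContext time

function stream(buffer) {
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // Only resynchronize when the schedule is in the past;
  // otherwise keep stacking chunks back-to-back
  if (time < context.currentTime) {
    time = context.currentTime;
  }
  source.start(time);
  time += buffer.duration;
}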
Another potential issue is that the sample rate of the PCM buffer that you are decoding may not match the sample rate of the AudioContext. In this case, the browser resamples each PCM buffer separately, resulting in discontinuities at the boundaries of the chunks. See Clicking sounds in Stream played with Web Audio Api.
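One way to avoid that per-chunk resampling, assuming you know the PCM stream's rate (44100 Hz here is an assumption), is to create the context with a matching rate:
// Ask the browser for a context running at the stream's own sample rate,
// so chunks are not resampled independently at their boundaries
const context = new AudioContext({ sampleRate: 44100 });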
It's an issue with mp3 files: each mp3 file has a few frames of silence at the start and end.
If you use wav files, or time the start and stop of each file properly, you can fix it; a sketch of the second option follows.
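A minimal sketch of trimming an assumed amount of silence per file (gapSeconds is a made-up parameter; the Web Audio start(when, offset, duration) overload does the trimming):
// Skip an assumed amount of leading silence and cut the trailing silence
function playTrimmed(context, buffer, startAt, gapSeconds) {
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // start(when, offset, duration): begin gapSeconds in, end gapSeconds early
  source.start(startAt, gapSeconds, buffer.duration - 2 * gapSeconds);
}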
I've been working on using the HTML audio tag to play some audio files. The audio plays alright, but the duration property of the audio tag always returns Infinity.
I tried the accepted answer to this question but with the same result. Tested with Chrome, IE and Firefox.
Is this a bug with the audio tag, or am I missing something?
Some of the code I'm using to play the audio files.
The JavaScript function called when the play button is pressed:
function playPlayerV2(src) {
  var player = document.getElementById("audioplayerV2");
  player.addEventListener("loadedmetadata", function (_event) {
    console.log(player.duration);
  });
  player.src = src;
  player.load();
  player.play();
}
the audio tag in html
<audio controls="true" id="audioplayerV2" style="display: none;" preload="auto"></audio>
Note: I'm hiding the standard audio player with the intent of using a custom layout and making use of the player via JavaScript; this does not seem to be related to my problem.
Try this:
var getDuration = function (url, next) {
  var _player = new Audio(url);
  _player.addEventListener("durationchange", function (e) {
    if (this.duration != Infinity) {
      var duration = this.duration;
      _player.remove();
      next(duration);
    }
  }, false);
  _player.load();
  _player.currentTime = 24 * 60 * 60; // fake big time
  _player.volume = 0;
  _player.play();
  // waiting...
};

getDuration('/path/to/audio/file', function (duration) {
  console.log(duration);
});
I think this is due to a Chrome bug. Until it's fixed:
if (video.duration === Infinity) {
  video.currentTime = 10000000;
  setTimeout(() => {
    video.currentTime = 0; // to reset the time, so it starts at the beginning
  }, 1000);
}
let duration = video.duration;
This works for me
const audio = document.getElementById("audioplayer");

audio.addEventListener('loadedmetadata', () => {
  if (audio.duration === Infinity) {
    audio.currentTime = 1e101;
    audio.addEventListener('timeupdate', getDuration);
  }
});

function getDuration() {
  audio.currentTime = 0;
  audio.removeEventListener('timeupdate', getDuration);
  console.log(audio.duration);
}
In case you control the server and can make it send proper media headers - this is what helped the OP.
I faced this problem with files stored in Google Drive when fetching them in the mobile version of Chrome. I cannot control Google Drive's response, so I have to deal with it somehow.
I don't have a solution that satisfies me yet, but I tried the idea from both posted answers - which is basically the same: make the audio/video object seek to the real end of the resource. After Chrome finds the real end position, it gives you the duration. However, the result is unsatisfying.
What this hack really does is force Chrome to load the resource into memory completely. So, if the resource is too big, or the connection is too slow, you end up waiting a long time for the file to be downloaded behind the scenes. And you have no control over that file - it is handled by Chrome, and once it decides that it is no longer needed, it will dispose of it, so the bandwidth may be spent inefficiently.
So, in case you can load the file yourself, it is better to download it (e.g. as a blob) and feed it to your audio/video control, as sketched below.
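A minimal sketch of that approach (the URL and element id are placeholders):
// Download the whole file first, then hand the browser a local blob:
// the duration is then known up front and nothing is re-fetched
async function loadAudioAsBlob(url, audioElement) {
  const response = await fetch(url);
  const blob = await response.blob();
  audioElement.src = URL.createObjectURL(blob);
}

loadAudioAsBlob('/path/to/audio/file', document.getElementById('audioplayer'));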
If this is a Twilio mp3, try the .wav version. The mp3 comes across as a stream, and it fools audio players.
To use the .wav version, just change the format of the source URL from .mp3 to .wav (or leave it off; wav is the default).
Note: the wav file is about 4x larger, so that's the downside of switching.
Not a direct answer, but in case anyone using blobs came here: I managed to fix it using a package called webm-duration-fix.
import fixWebmDuration from "webm-duration-fix";
...
fixedBlob = await fixWebmDuration(blob);
...
If you want to modify the video file completely, you can use the "webmFixDuration" package. Other methods are applied at the display level, only on the video tag; with this method, the complete video file is modified.
webmFixDuration github example
mediaRecorder.onstop = async () => {
  const duration = Date.now() - startTime;
  const buggyBlob = new Blob(mediaParts, { type: 'video/webm' });
  const fixedBlob = await webmFixDuration(buggyBlob, duration);
  displayResult(fixedBlob);
};