I would like to load a video as a Blob the way YouTube or Netflix do, for example.
That is, load the video from 0s to 10s as a Blob, display it, then load the video from 10s to 20s, and so on.
I thought about doing it with this code (which works without the #t=0,10, but waits for the entire video to load before playing it).
var query = new XMLHttpRequest();
var videolink = "test.mp4";
var videobalise = $("video");
var get = videolink + "#t=0,10";

query.open("GET", get, true);
query.responseType = "blob";

query.onload = function() {
  if (this.status === 200) {
    var currentTime = videobalise[0].currentTime;
    var wasPlaying = !videobalise[0].paused; // remember playback state
    window.URL = window.URL || window.webkitURL;
    var videoBlob = this.response;
    var video = window.URL.createObjectURL(videoBlob);
    videobalise.attr("src", video);
    videobalise[0].currentTime = currentTime;
    if (wasPlaying) {
      videobalise[0].play();
    }
  }
};

query.send();
Thanks in advance,
Thomas
I can't tell for Netflix, but for YouTube, that's not what they do.
The blob URI you see as the src of their <video> element points to a MediaSource object, not to a Blob.
Now, the #t media-fragment identifier works only on resources loaded directly by a MediaElement.
MediaElements will try to load only the required data from the server. This is done thanks to HTTP Range requests. But it requires that the media's metadata has already been fetched and parsed, so that the browser can know at which byte range the next data to load is located.
The #t fragment identifier is just a way to tell the browser that we are only interested in the time range defined there, and thus this fragment identifier also needs the media's metadata to be accessible, in order to map that time range to the correct byte offsets.
All this to say that you actually need the full file to be available at the URI on which you set this #t fragment identifier.
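As a sketch of what the browser does under the hood, here is how one of those Range requests could look if you issued it yourself with fetch (the file name is a placeholder, and the server must support Range requests and answer 206 Partial Content):

```javascript
// Build an HTTP Range header for an inclusive byte span.
function rangeHeader(start, end) {
  return { Range: "bytes=" + start + "-" + end };
}

// Fetch a single slice of the file as a Blob (hypothetical URL).
// A 206 response means the server honored the range;
// a plain 200 means it ignored it and sent the whole file.
function fetchSlice(url, start, end) {
  return fetch(url, { headers: rangeHeader(start, end) })
    .then(resp => resp.blob());
}

// e.g. the first 512 KiB: fetchSlice("test.mp4", 0, 524287)
```

Note that a raw byte slice like this is not directly playable on its own; the browser can only make sense of it because it already parsed the file's metadata.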
So in your case, you would have to load the full file from the original URI, and only on the blob URI that you'll set as the <video>'s src would you append the fragment identifier:
var url = "https://upload.wikimedia.org/wikipedia/commons/transcoded/a/a4/BBH_gravitational_lensing_of_gw150914.webm/BBH_gravitational_lensing_of_gw150914.webm.480p.webm";
fetch(url) // fetch the whole file
  .then(resp => resp.blob())
  .then(blob => {
    const blobURI = URL.createObjectURL(blob);
    const fragId = "#t=5"; // start at 5s
    vid.src = blobURI + fragId; // here you set the fragId
  });
video { height: 100vh; }
<video id="vid" controls></video>
So this is probably not what you want, since you would still load the full file.
But I guess that what you really wanted was a way to hide the original file, in a desperate attempt to block your users from downloading it. Then I am sorry to tell you that this won't work at all, because your AJAX request is clearly visible.
Note that even fetching the file by Range requests yourself and feeding it to a MediaSource like YouTube does won't help either: YouTube videos are actually downloadable.
The best move if you really want to do that might be to use the Encrypted Media Extensions API.
Related
navigator.mediaDevices.getUserMedia({ video: true, audio: true }).then(stream => {
  // a recorder is created
  var mediaRecorder = new MediaRecorder(stream);
  // start it
  mediaRecorder.start();
  // an array is created that receives all the data
  var recordedChunks = [];
  // fill it
  mediaRecorder.ondataavailable = function(e) { recordedChunks.push(e.data); };
  // when stopped, download the recording
  mediaRecorder.onstop = function() {
    var blob = new Blob(recordedChunks, {
      'type': 'video/mp4'
    });
    var url = URL.createObjectURL(blob);
    var a = document.createElement('a');
    document.body.appendChild(a);
    a.style = 'display: none';
    a.href = url;
    a.download = 'test.webm';
    a.click();
    window.URL.revokeObjectURL(url);
  };
}).catch(console.error);
This code works for me, but the video it downloads is missing the details (metadata):
(left image: a video downloaded from youtube; right image: a video downloaded using mediaRecorder)
https://i.stack.imgur.com/IxmYD.png
And the other problem is that the necessary actions cannot be performed on the video (speeding it up, jumping to a specific time), since it has no end time (duration):
https://i.stack.imgur.com/yF7qx.png
What could I do to give it the required details and formats?
Here is a page that has the same problems when you download a recording from your webcam:
https://webrtc.github.io/samples/src/content/getusermedia/record/
If you want to set the MIME media type for a recording created by MediaRecorder, you must do so when you call its constructor. For example:
stream = await navigator.mediaDevices.getUserMedia (constraints)
const mediaRecorder = new MediaRecorder(stream, {mimeType: 'video/mp4'})
But most browsers' MediaRecorders (that is, Chromium's and Firefox's) don't produce video/mp4; instead they produce video/webm. You can use the static MediaRecorder.isTypeSupported() method to find out whether the browser can handle the type you want, or you can take whatever type it gives you: the mediaRecorder.mimeType property of your MediaRecorder instance tells you what MIME type you actually get.
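A hedged sketch of that probing step (the candidate list here is an assumption; replace it with the types you actually care about):

```javascript
// Return the first MIME type this browser's MediaRecorder can produce,
// or "" to fall back to the browser default. Outside a browser,
// MediaRecorder is undefined and we also fall back.
function pickMimeType(candidates) {
  if (typeof MediaRecorder === "undefined") return "";
  return candidates.find(function(t) {
    return MediaRecorder.isTypeSupported(t);
  }) || "";
}

var mimeType = pickMimeType([
  "video/mp4",
  'video/webm; codecs="vp9,opus"',
  'video/webm; codecs="vp8,opus"'
]);

// Pass an options object only when we found a supported type:
// var recorder = new MediaRecorder(stream, mimeType ? { mimeType: mimeType } : undefined);
```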
If you want to get mp4 recordings from those browsers you'll have to postprocess it.
And, of course you're correct that a live recording has no length. MediaRecorder produces a data stream suitable for playing live. Again, if you want to make it seekable and apply an end time, you need to use postprocessing. MediaRecorder doesn't do that.
ffmpeg is a decent way to postprocess video. Explaining how to do it is far beyond the scope of a StackOverflow answer.
I have an array of Blobs (binary data, really; I can express it however is most efficient. I'm using Blobs for now, but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like this:
var arrayOfBlobs = [];

setInterval(function() {
  arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 <video> element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
1. Create a MediaSource
2. Create an object URL from the MediaSource
3. Set the video's src to the object URL
4. On the sourceopen event, create a SourceBuffer
5. Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
- The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work.
- You can only append one chunk of video data to the SourceBuffer at a time, and you can't append a second chunk until the first one has finished (asynchronously) processing.
- If you append too much data to the SourceBuffer without calling .remove(), you'll eventually run out of RAM and the video will stop playing. I hit this limit at around 1 hour on my laptop.
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer, then slowly drain that queue on updateend). If you are able to wait until the SourceBuffer has been created before you start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
<video id="video"></video>
<script>
// As before, I'm regularly grabbing blobs of video data
// The implementation of "nextChunk" could be various things:
//   - reading from a MediaRecorder
//   - reading from an XMLHttpRequest
//   - reading from a local webcam
//   - generating the files on the fly in JavaScript
//   - etc.
var arrayOfBlobs = [];

setInterval(function() {
  arrayOfBlobs.push(nextChunk());
  // NEW: Try to flush our queue of video data to the video element
  appendToSourceBuffer();
}, 1000);

// 1. Create a `MediaSource`
var mediaSource = new MediaSource();

// 2. Create an object URL from the `MediaSource`
var url = URL.createObjectURL(mediaSource);

// 3. Set the video's `src` to the object URL
var video = document.getElementById("video");
video.src = url;

// 4. On the `sourceopen` event, create a `SourceBuffer`
var sourceBuffer = null;
mediaSource.addEventListener("sourceopen", function() {
  // NOTE: Browsers are VERY picky about the codec being EXACTLY
  // right here. Make sure you know which codecs you're using!
  sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="opus,vp8"');

  // If we requested any video data prior to setting up the SourceBuffer,
  // we want to make sure we only append one chunk at a time
  sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});

// 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
function appendToSourceBuffer() {
  if (
    mediaSource.readyState === "open" &&
    sourceBuffer &&
    sourceBuffer.updating === false &&
    arrayOfBlobs.length // don't append `undefined` when the queue is empty
  ) {
    sourceBuffer.appendBuffer(arrayOfBlobs.shift());
  }

  // Limit the total buffer size to 20 minutes
  // This way we don't run out of RAM
  if (
    video.buffered.length &&
    video.buffered.end(0) - video.buffered.start(0) > 1200
  ) {
    sourceBuffer.remove(0, video.buffered.end(0) - 1200);
  }
}
</script>
</body>
</html>
As an added bonus, this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek simply by setting video.currentTime = ...).
Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler, to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
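A minimal sketch of where that line goes (the codec string is an assumption; use whatever your stream actually contains):

```javascript
// Create the SourceBuffer inside the sourceopen handler and switch it
// to 'sequence' mode so chunks are appended in arrival order,
// regardless of their internal timestamps.
function setupSequenceBuffer(mediaSource, mime) {
  var sb = mediaSource.addSourceBuffer(mime);
  sb.mode = "sequence"; // the default is "segments"
  return sb;
}

// mediaSource.addEventListener("sourceopen", function() {
//   sourceBuffer = setupSequenceBuffer(mediaSource, 'video/webm; codecs="opus,vp8"');
// });
```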
Additionally, make sure that you are not sending any chunks with data.size === 0, and avoid building up a backlog ('stack') of unsent chunks on the broadcasting side by clearing it as chunks are sent, unless you want to record it as one entire video, in which case just make sure the broadcast video is small enough and your internet connection is fast. The smaller and lower the resolution, the more likely you can keep a realtime connection with a client, i.e. a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. The video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
Maybe I just got it wrong but... I'm requesting "large" files via AJAX (180 MB - 500 MB). I thought I would be able to fetch and use the data with URL.createObjectURL while it's actually loading? I need the requested data within 5 seconds, but it actually takes 16 seconds to load.
- AJAX request
- xhr.onload (worked within 5 seconds or faster locally, but not live)
- within onload (or progress, onreadystatechange; I tried) I used URL.createObjectURL(xhr.response) to get the data
var nxtclp = new XMLHttpRequest();

nxtclp.onload = function() {
  get_src = URL.createObjectURL(nxtclp.response);
  that.preloadSource = get_src;
};

nxtclp.open("GET", "media/vid.mp4");
nxtclp.responseType = "blob";
nxtclp.send();
Is there any way to play back the data while it is loading?
Use the autoplay attribute on the <video> element:
autoplay
A Boolean attribute; if specified, the video automatically begins to
play back as soon as it can do so without stopping to finish loading
the data.
<video controls autoplay src="http://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm">
</video>
Alternatively, using JavaScript:
var video = document.createElement("video");
video.autoplay = true;
video.controls = true;

video.onloadedmetadata = function(e) {
  console.log(video.readyState);
  // note: `e.path` is non-standard (Chrome-only); append the element itself
  document.body.appendChild(video);
};

video.src = "http://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm";
video.src = "http://mirrors.creativecommons.org/movingimages/webm/ScienceCommonsJesseDylan_240p.webm";
<body>
</body>
I am trying to upload a video to a server, and on the client end I am reading it using FileReader's readAsBinaryString().
Now, my problem is that I don't know how to read the duration of this video file.
If I try reading the file and assigning the reader's data to a video tag's source, then none of the events associated with the video tag are fired. I need to find the duration of the uploaded file on the client end.
Can somebody please suggest something?
You can do something like this for that to work:
- read the file as an ArrayBuffer (this can be posted directly to the server as a binary stream later)
- wrap it in a Blob object
- create an object URL for the blob
- and finally set the URL as the video source.
When the video object fires the loadedmetadata event, you should be able to read the duration.
You could use a data-URI too, but note that browsers may apply size limits (among other disadvantages) to them, which is essential when it comes to video files, and there is a significant encoding/decoding overhead due to the Base-64 process.
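To put a rough number on that overhead: Base-64 encodes every 3 input bytes as 4 output characters, so a data-URI is about a third larger than the raw file. A quick sketch:

```javascript
// Approximate length, in characters, of the Base-64 encoding of
// `byteLength` raw bytes (padding included, "data:...;base64," prefix excluded).
function base64Length(byteLength) {
  return Math.ceil(byteLength / 3) * 4;
}

// A 30 MB video becomes roughly 40 MB of Base-64 text that the
// browser must hold in memory and decode before it can play it.
var chars = base64Length(30 * 1024 * 1024);
```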
Example
Select a video file you know the browser can handle (in production you should of course filter accepted file types based on video.canPlayType()).
The duration will show after the above steps have been performed (no error handling is included in the example; adjust as needed).
var fileEl = document.querySelector("input");
fileEl.onchange = function(e) {
var file = e.target.files[0], // selected file
mime = file.type, // store mime for later
rd = new FileReader(); // create a FileReader
rd.onload = function(e) { // when file has read:
var blob = new Blob([e.target.result], {type: mime}), // create a blob of buffer
url = (URL || webkitURL).createObjectURL(blob), // create o-URL of blob
video = document.createElement("video"); // create video element
video.preload = "metadata"; // preload setting
video.addEventListener("loadedmetadata", function() { // when enough data loads
document.querySelector("div")
.innerHTML = "Duration: " + video.duration + "s"; // show duration
(URL || webkitURL).revokeObjectURL(url); // clean up
// ... continue from here ...
});
video.src = url; // start video load
};
rd.readAsArrayBuffer(file); // read file object
};
<input type="file"><br><div></div>
You can do something like below; the trick is to use readAsDataURL:
var reader = new FileReader();

reader.onload = function() {
  var media = new Audio(reader.result);
  media.onloadedmetadata = function() {
    media.duration; // this would give the duration of the video/audio file
  };
};

reader.readAsDataURL(file);
Fiddle Demo
Here's a fiddle to show the problem. Basically, whenever the createMediaElementSource method of an AudioContext object is called, the output of the audio element is re-routed into the returned MediaElementAudioSourceNode. This is all fine and according to spec; however, when I then try to reconnect the output to the speakers (using the destination of the AudioContext), nothing happens.
Am I missing something obvious here? Maybe it has to do with cross-domain audio files? I just couldn't find any information on the topic on Google, and didn't see a note of it in the specs.
Code from the fiddle is:
var a = new Audio();
a.src = "http://webaudioapi.com/samples/audio-tag/chrono.mp3";
a.controls = true;
a.loop = true;
a.autoplay = true;
document.body.appendChild(a);

var ctx = new AudioContext();

// PROBLEM HERE
var shouldBreak = true;
var src;
if (shouldBreak) {
  // this one stops playback
  // it should redirect output from the audio element to the MediaElementAudioSourceNode
  // but src.connect(ctx.destination) does not fix it
  src = ctx.createMediaElementSource(a);
  src.connect(ctx.destination);
}
Yes, the Web Audio API requires that the audio adhere to the Same-Origin Policy. If the audio you're attempting to play is not from the same origin, then the appropriate Access-Control headers are required. The resource in your example does not have the required headers.
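For completeness: if you control the server and it does send those headers, you also have to opt the element into CORS mode on the JavaScript side before the Web Audio API may read its samples. A minimal sketch (the URL is a placeholder):

```javascript
// Request the media in CORS mode so the Web Audio API is allowed to
// process its samples; this only works if the server responds with an
// appropriate Access-Control-Allow-Origin header.
function enableCors(mediaEl, url) {
  mediaEl.crossOrigin = "anonymous"; // must be set before assigning src
  mediaEl.src = url;
  return mediaEl;
}

// var a = enableCors(new Audio(), "https://example.com/chrono.mp3");
// var src = ctx.createMediaElementSource(a);
// src.connect(ctx.destination);
```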