I have a problem playing an H264 video using the JavaScript Media Source Extensions (MSE) API.
I'll describe the scenario in detail below.
I've already successfully played audio and video sources using the VP8, VP9, Opus and Vorbis codecs, both from range requests (any byte range, if the server supports it) and from chunked files, with the chunks generated by Shaka Packager.
The problem comes when the source is an H264 video. In my case the codecs are avc1.64001e and mp4a.40.2, and the full codec string is
video/mp4;codecs="avc1.64001e, mp4a.40.2", but the issue happens with any other avc1 codec as well.
What I am trying to do is play a 10-megabyte chunk of the full video, generated by a byte-range curl request with the response saved locally using -o.
Below is the stream info from Shaka Packager when this file is passed as input:
[0530/161459:INFO:demuxer.cc(88)] Demuxer::Run() on file '10mega.mp4'.
[0530/161459:INFO:demuxer.cc(160)] Initialize Demuxer for file '10mega.mp4'.
File "10mega.mp4":
Found 2 stream(s).
Stream [0] type: Video
codec_string: avc1.64001e
time_scale: 17595
duration: 57805440 (3285.3 seconds)
is_encrypted: false
codec: H264
width: 720
height: 384
pixel_aspect_ratio: 1:1
trick_play_factor: 0
nalu_length_size: 4
Stream [1] type: Audio
codec_string: mp4a.40.2
time_scale: 44100
duration: 144883809 (3285.3 seconds)
is_encrypted: false
codec: AAC
sample_bits: 16
num_channels: 2
sampling_frequency: 44100
language: und
Packaging completed successfully.
The chunk is playable with external media player applications (like VLC) and, more importantly, it plays without problems when added to the webpage using the <source> tag.
This is the error I can see in the Chrome console:
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
Below are the HTML and JS code if you want to reproduce the issue (I did all local tests using the built-in PHP 7.2 dev server).
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8"/>
<title>VideoTest</title>
<link rel="icon" href="/favicon.ico" />
<script type="text/javascript" src="/script.js"></script>
<style>
video {
width: 98%;
height: 300px;
border: 0px solid #000;
display: flex;
}
</style>
</head>
<body>
<div id="videoContainer">
<video controls></video>
</div>
<video controls>
<source src="/media/10mega.mp4" type="video/mp4">
</video>
</body>
</html>
And here is the JS code (script.js):
class MediaTest {
constructor() {
}
init(link) {
this.link = link;
this.media = new MediaSource();
this.container = document.getElementsByTagName('video')[0];
this.container.src = window.URL.createObjectURL(this.media);
return new Promise(resolve => {
this.media.addEventListener('sourceopen', (e) => {
this.media = e.target;
return resolve(this);
});
});
}
addSourceBuffer() {
let codec = 'video/mp4;codecs="avc1.64001e, mp4a.40.2"';
let sourceBuffer = this.media.addSourceBuffer(codec);
// These are the same headers sent by the <source> tag;
// the issue remains with or without them
let headers = new Headers({
'Range': 'bytes=0-131072',
'Accept-Encoding': 'identity;q=1, *;q=0'
});
let requestData = {
headers: headers
};
let request = new Request(this.link, requestData);
return new Promise(resolve => {
fetch(request).then((response) => {
if(200 !== response.status) {
throw new Error('addSourceBuffer error with status ' + response.status);
}
return response.arrayBuffer();
}).then((buffer) => {
sourceBuffer.appendBuffer(buffer);
console.log('Buffer appended');
return resolve(this);
}).catch(function(e) {
console.log('addSourceBuffer error');
console.log(e);
});
});
}
play() {
this.container.play();
}
}
window.addEventListener('load', () => {
let media = new MediaTest();
media.init('/media/10mega.mp4').then(() => {
console.log('init ok');
return media.addSourceBuffer();
}).then((obj) => {
console.log('play');
media.play();
});
});
What I want to achieve is to play the file with the MediaSource API, since it plays well using the <source> tag.
I don't want to demux and re-encode it, but use it as is.
Below is the error dump taken from chrome://media-internals:
render_id: 180 player_id: 11 pipeline_state: kStopped event: WEBMEDIAPLAYER_DESTROYED
To reproduce, I think it is possible to use any H264 video that has both an audio and a video track.
This question is strictly related to another question I've found, H264 video works using src attribute. Same video fails using the MediaSource API (Chromium), but that one is from 4 years ago, so I decided not to follow up there.
Does anybody have any idea about this issue?
Is there any way to solve it, or is H264 simply not compatible with MSE?
Thanks in advance.
It's not the codec, it's the container. MSE requires fragmented MP4 files; standard (non-fragmented) MP4 is not supported. For a standard MP4 you must use <video src="my.mp4">.
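A minimal diagnostic sketch along those lines (not the asker's code; it reuses the codec string and file path from the question, and the ffmpeg flags mentioned in the comments are one possible way to remux, not the only one):
// The codec string itself is accepted, so this is not a codec problem:
const codec = 'video/mp4;codecs="avc1.64001e, mp4a.40.2"';
console.log(MediaSource.isTypeSupported(codec)); // true in Chrome

// Appending a plain (non-fragmented) MP4 still fails, and the error events
// surface what chrome://media-internals reports:
const video = document.querySelector('video');
const ms = new MediaSource();
video.src = URL.createObjectURL(ms);
ms.addEventListener('sourceopen', async () => {
  const sb = ms.addSourceBuffer(codec);
  sb.addEventListener('error', () => console.log('SourceBuffer append failed'));
  video.addEventListener('error', () => console.log('media element error:', video.error));
  const buf = await (await fetch('/media/10mega.mp4')).arrayBuffer();
  sb.appendBuffer(buf); // fails: the file has no movie fragments (moof boxes)
});
// Remuxing first into a fragmented MP4 (for example with ffmpeg's
// "-movflags frag_keyframe+empty_moov", or by letting Shaka Packager produce
// DASH/HLS segments) makes the same appendBuffer() call succeed.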
OK, so I'm an IT guy and kind of a noob on the dev side of the fence. But I've been able to create this ffmpeg wasm page that takes a canvas and converts it to .webm and .mp4. What I WANT to do is take the resulting .mp4 file and upload it to the server the page/JS are being served from. Is this possible? I will include my source code, which is fairly simple and straightforward; I just don't know how to manipulate the resulting mp4 file that ffmpeg spits out (I realize it is happening client side) to be able to push it up to the server (maybe with an upload.php type situation?). The solution can be HTML/JavaScript/PHP, whatever, so long as it takes the mp4 output and gets it onto the server. I'd VERY MUCH appreciate a hand here.
Going to try my best to properly insert the HTML and JS. Please bear with me if I've done something wrong; I've never had to ask a question on here, I usually just look up existing answers.
const { createFFmpeg } = FFmpeg;
const ffmpeg = createFFmpeg({
log: true
});
const transcode = async (webcamData) => {
const message = document.getElementById('message');
const name = 'record.webm';
await ffmpeg.load();
message.innerHTML = 'Start transcoding';
await ffmpeg.write(name, webcamData);
await ffmpeg.transcode(name, 'output.mp4');
message.innerHTML = 'Complete transcoding';
const data = ffmpeg.read('output.mp4');
const video = document.getElementById('output-video');
video.src = URL.createObjectURL(new Blob([data.buffer], { type: 'video/mp4' }));
dl.href = video.src;
dl.innerHTML = "download mp4"
}
fn().then(async ({url, blob})=>{
transcode(new Uint8Array(await (blob).arrayBuffer()));
})
function fn() {
var recordedChunks = [];
var time = 0;
var canvas = document.getElementById("canvas");
return new Promise(function (res, rej) {
var stream = canvas.captureStream(60);
mediaRecorder = new MediaRecorder(stream, {
mimeType: "video/webm; codecs=vp9"
});
mediaRecorder.start(time);
mediaRecorder.ondataavailable = function (e) {
recordedChunks.push(e.data);
// for demo, removed stop() call to capture more than one frame
}
mediaRecorder.onstop = function (event) {
var blob = new Blob(recordedChunks, {
"type": "video/webm"
});
var url = URL.createObjectURL(blob);
res({url, blob}); // resolve both blob and url in an object
myVideo.src = url;
// removed data url conversion for brevity
}
// for demo, draw random lines and then stop recording
var i = 0,
tid = setInterval(()=>{
if(i++ > 20) { // draw 20 lines
clearInterval(tid);
mediaRecorder.stop();
}
let canvas = document.querySelector("canvas");
let cx = canvas.getContext("2d");
cx.beginPath();
cx.strokeStyle = 'green';
cx.moveTo(Math.random()*100, Math.random()*100);
cx.lineTo(Math.random()*100, Math.random()*100);
cx.stroke();
},200)
});
}
<html>
<head>
<script src="https://unpkg.com/#ffmpeg/ffmpeg#0.8.1/dist/ffmpeg.min.js" defer></script>
<script src="canvas2mp4.js" defer></script>
</head>
<body>
here is a canvas<br>
<canvas id="canvas" style="height:100px;width:100px"></canvas><br>
here is a recorded video of the canvas in webM format<br>
<video id="myVideo" controls="controls"></video><br>
here is a transcoded mp4 from the webm above CLIENT SIDE using ffmpeg<br>
<video id="output-video" controls="controls"></video><br>
<a id="dl" href="" download="download.mp4"></a>
<div id="message"></div><br><br>
</body>
</html>
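For the upload part of the question above, here is a hedged sketch (not from the original post): the bytes returned by ffmpeg.read('output.mp4') can be wrapped in a Blob and POSTed with fetch() and FormData. The /upload.php endpoint and its server-side handling are assumptions, not something shipped with ffmpeg.wasm.
// Hedged sketch: push the transcoded MP4 to the serving host.
// '/upload.php' is a hypothetical endpoint that accepts multipart form data.
async function uploadMp4(data) {            // data = ffmpeg.read('output.mp4')
  const blob = new Blob([data.buffer], { type: 'video/mp4' });
  const form = new FormData();
  form.append('video', blob, 'output.mp4'); // available as $_FILES['video'] in PHP
  const response = await fetch('/upload.php', { method: 'POST', body: form });
  if (!response.ok) throw new Error('upload failed: ' + response.status);
  console.log('uploaded:', await response.text());
}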
I'm creating a video recorder script using JavaScript and the MediaRecorder API, with a video capture device as the source. The video output is 1920 x 1080, but I'm trying to shrink this resolution to 640 x 360 (360p).
I will write all the code below. I tried many configurations and variants of the HTML and JS, and according to this site my video source should be able to fit the size I'm trying to force.
The video source is this Elgato Cam Link 4K.
UPDATE
Instead of using exact in the video constraints, replace it with ideal; the browser will then check whether this resolution is available on the device.
The Elgato Cam Link apparently doesn't support 360p. I tested with an external webcam that does support 360p, and using ideal it works.
Using the Windows camera settings you can see there are no other resolutions available on the Cam Link, only HD and FHD.
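A minimal sketch of the updated constraints (the device IDs are assumed to come from enumerateDevices(), as in the original code): ideal lets getUserMedia() fall back to the closest resolution the device offers, and track.getSettings() reports what was actually negotiated.
// Hedged sketch: ask for 640x360 as an ideal (not mandatory) resolution.
async function startCamera(audioDeviceId, videoDeviceId) {
  const constraints = {
    audio: { deviceId: audioDeviceId },
    video: {
      deviceId: videoDeviceId,
      width: { ideal: 640 },
      height: { ideal: 360 },
      frameRate: { ideal: 30 }
    }
  };
  const stream = await navigator.mediaDevices.getUserMedia(constraints);
  document.getElementById('videoEl').srcObject = stream;
  // e.g. { width: 1920, height: 1080, ... } on a device that only offers HD/FHD
  console.log(stream.getVideoTracks()[0].getSettings());
  return stream;
}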
The HTML tag:
<video id="videoEl" width="640" height="360" autoplay canplay resize></video>
This is the getUserMedia() script:
const video = document.getElementById('videoEl');
const constraints = {
audio: { deviceId: audioDeviceId },
video: {
deviceId: videoDeviceId,
width: { exact: 640 },
height: { exact: 360 },
frameRate: 30
}
};
this.CameraStream = await navigator.mediaDevices.getUserMedia(constraints);
video.srcObject = this.CameraStream;
Before that, I choose the video source using navigator.mediaDevices.enumerateDevices().
Then I tried some options for the MediaRecorder constructor:
this.MediaRecorder = new MediaRecorder(this.CameraStream)
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/webm' })
I found this mimeType in this forum:
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/x-matroska;codecs=h264' })
And the event listener
this.MediaRecorder.addEventListener('dataavailable', event => {
this.BlobsRecorded.push(event.data);
});
MediaRecorder on stop:
As I mentioned before, I tried some variants of the options:
const options = { type: 'video/x-matroska;codecs=h264' };
const options = { type: 'video/webm' };
const options = { type: 'video/mp4' }; // not supported
const finalVideo = URL.createObjectURL(
new Blob(this.BlobsRecorded, options)
);
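As an aside, a hedged helper (not part of the original code) that asks MediaRecorder which of those mimeTypes it actually supports before constructing the recorder:
// Hedged helper: pick the first mimeType this browser's MediaRecorder accepts.
const candidates = [
  'video/x-matroska;codecs=h264',
  'video/webm;codecs=h264',
  'video/webm',
  'video/mp4'
];
const chosen = candidates.find(type => MediaRecorder.isTypeSupported(type));
console.log('using mimeType:', chosen);
// const recorder = new MediaRecorder(this.CameraStream, { mimeType: chosen });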
Note
Everything is working perfectly; I'm just leaving the code so you can see the constraints used, and for illustrative purposes. If something is missing, let me know and I'll add it here.
Thank you for your time.
I am using Plyr as a wrapper around the HTML5 video tag and Hls.js to stream my .m3u8 video.
I went through a lot of Plyr issues about enabling a quality selector and came across multiple PRs raising this question that were closed saying the implementation had been merged, until I came across this PR which says it's still open. There was a custom implementation in the comments that reportedly works. I tried that implementation locally to check whether we can add a quality selector, but it seems I'm missing something, or the implementation doesn't work.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>HLS Demo</title>
<link rel="stylesheet" href="https://cdn.plyr.io/3.5.10/plyr.css" />
<style>
body {
max-width: 1024px;
}
</style>
</head>
<body>
<video preload="none" id="player" autoplay controls crossorigin></video>
<script src="https://cdn.plyr.io/3.5.10/plyr.js"></script>
<script src="https://cdn.jsdelivr.net/hls.js/latest/hls.js"></script>
<script>
(function () {
var video = document.querySelector('#player');
var playerOptions= {
quality: {
default: '720',
options: ['720']
}
};
var player;
player = new Plyr(video,playerOptions);
if (Hls.isSupported()) {
var hls = new Hls();
hls.loadSource('https://content.jwplatform.com/manifests/vM7nH0Kl.m3u8');
//hls.loadSource('https://test-streams.mux.dev/x36xhzz/x36xhzz.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED,function(event,data) {
// uncomment to see data here
// console.log('levels', hls.levels); we get data here but not able to see in settings .
playerOptions.quality = {
default: hls.levels[hls.levels.length - 1].height,
options: hls.levels.map((level) => level.height),
forced: true,
// Manage quality changes
onChange: (quality) => {
console.log('changes',quality);
hls.levels.forEach((level, levelIndex) => {
if (level.height === quality) {
hls.currentLevel = levelIndex;
}
});
}
};
});
}
// Start HLS load on play event
player.on('play', () => hls.startLoad());
// Handle HLS quality changes
player.on('qualitychange', () => {
console.log('changed');
if (player.currentTime !== 0) {
hls.startLoad();
}
});
})();
</script>
</body>
</html>
The above snippet works (please run it), but if you uncomment the console.log line in the MANIFEST_PARSED handler you will see that we get data in levels and pass it to the player options, yet it doesn't show up in the settings. How can we add a quality selector to Plyr when using an HLS stream?
I made a lengthy comment about this on GitHub [1].
Working example: https://codepen.io/datlife/pen/dyGoEXo
The main idea to fix this is:
Configure the Plyr options properly to allow the switching to happen.
Let HLS perform the quality switching, not Plyr. Hence, we only need a single <source> tag inside the <video> tag:
<video>
  <!-- a single source that contains all the streams -->
  <source
    type="application/x-mpegURL"
    src="https://bitdash-a.akamaihd.net/content/sintel/hls/playlist.m3u8">
</video>
[1] https://github.com/sampotts/plyr/issues/1741#issuecomment-640293554
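A minimal sketch of that idea, adapted to the manifest URL from the question (the CodePen linked above has the full version; treat this as an illustration rather than a drop-in): parse the manifest first, build the quality options from hls.levels, and only then construct Plyr so the selector is populated.
// Hedged sketch: create Plyr after Hls.js has parsed the manifest.
const video = document.querySelector('#player');
const hls = new Hls();
hls.loadSource('https://content.jwplatform.com/manifests/vM7nH0Kl.m3u8');
hls.attachMedia(video);
hls.on(Hls.Events.MANIFEST_PARSED, function () {
  const heights = hls.levels.map(level => level.height);
  const player = new Plyr(video, {
    quality: {
      default: heights[heights.length - 1],
      options: heights,
      forced: true,
      onChange: (newQuality) => {
        // Let Hls.js do the switching, not Plyr.
        hls.levels.forEach((level, index) => {
          if (level.height === newQuality) hls.currentLevel = index;
        });
      }
    }
  });
});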
I have an application where I present some images, videos and audio files.
The images work perfectly, but I have problems with the audio and the videos.
Node API:
let config = new Config(),
client = config.Storage
export default class ImageCtrl {
product = (req, res, next) => {
var params = req.params,
point = params.filename.lastIndexOf('.'),
nameFile = params.filename.slice(0, point),
ext = params.filename.slice(point);
res.writeHead(200, {
'Cache-Control': 'no-cache'
});
let remote = `${nameFile}-thumb${ext}`;
if (nameFile.includes('audio')) {
remote = `${nameFile}${ext}`
}
client.download({
container: 'Multimedia',
remote: remote
}, function(err, result) {
// handle the download result
}).pipe(res);
}
}
config.ts
export default class Config {
Storage: any = pkgcloud.storage.createClient(config);
}
Angular4-HTML5
<video width="100%" *ngIf="product.mediacategory === 'video'" controls preload="none" [poster]="'/api/image/products/' + product._id + '-video.png'" controlsList="nodownload">
<source [src]="/api/image/products/5a69b1b32e4be51cb82a7659-video.webm" type="video/webm">
<source [src]="/api/image/products/5a69b1b32e4be51cb82a7659-video.mp4" type="video/mp4">
<source [src]="/api/image/products/5a69b1b32e4be51cb82a7659-video.ogv" type="video/ogv">
</video>
Audio and images use the same API, but in Safari only the images work. In Chrome and Firefox, audio, video and images all work fine.
In Safari it appears like this:
In Chrome:
----If I remove the [] from [src] it doesn't work in Chrome.-----
Safari on iOS supports low-complexity AAC audio, MP3 audio, AIF audio, WAVE audio, and baseline profile MPEG-4 video. Safari on the desktop (Mac OS X and Windows) supports all media supported by the installed version of QuickTime, including any installed third-party codecs
(Source: Apple's Safari HTML5 Audio and Video Guide.)
I'm building a cross-platform web app where audio is generated on-the-fly on the server and live streamed to a browser client, probably via the HTML5 audio element. On the browser, I'll have Javascript-driven animations that must precisely sync with the played audio. "Precise" means that the audio and animation must be within a second of each other, and hopefully within 250ms (think lip-syncing). For various reasons, I can't do the audio and animation on the server and live-stream the resulting video.
Ideally, there would be little or no latency between the audio generation on the server and the audio playback on the browser, but my understanding is that latency will be difficult to control and probably in the 3-7 second range (browser-, environment-, network- and phase-of-the-moon-dependent). I can handle that, though, if I can precisely measure the actual latency on-the-fly so that my browser Javascript knows when to present the proper animated frame.
So, I need to precisely measure the latency between my handing audio to the streaming server (Icecast?) and the audio coming out of the speakers on the computer hosting the browser. Some blue-sky possibilities:
Add metadata to the audio stream, and parse it from the playing audio (I understand this isn't possible using the standard audio element)
Add brief periods of pure silence to the audio, and then detect them on the browser (can audio elements yield the actual audio samples?)
Query the server and the browser as to the various buffer depths
Decode the streamed audio in Javascript and then grab the metadata
Any thoughts as to how I could do this?
Utilize the timeupdate event of the <audio> element, which is fired three to four times per second, to perform precise animations during streaming of media by checking the .currentTime of the <audio> element. Animations or transitions can be started or stopped up to several times per second.
If it is available in the browser, you can use fetch() to request the audio resource, in .then() return response.body.getReader(), which returns a ReadableStream of the resource, then create a new MediaSource object and set the <audio> or new Audio() .src to an objectURL of the MediaSource; append the first chunk of the stream at .read(), chained with .then(), to the sourceBuffer of the MediaSource with .mode set to "sequence", and append the remainder of the chunks to the sourceBuffer at the sourceBuffer updateend events.
If fetch() response.body.getReader() is not available in the browser, you can still use the timeupdate or progress event of the <audio> element to check .currentTime, and start or stop animations or transitions at the required second of the streaming media playback.
Use the canplay event of the <audio> element to play the media when the stream has accumulated adequate buffers at the MediaSource to proceed with playback.
To perform precise animations you can use an object whose properties are numbers corresponding to the .currentTime of the <audio> at which an animation should occur, and whose values are the CSS property of the element that should be animated.
In the JavaScript below, animations occur every twenty seconds, beginning at 0, and every sixty seconds, until the media playback has concluded.
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta charset="utf-8" />
<title></title>
<style>
body {
width: 90vw;
height: 90vh;
background: #000;
transition: background 1s;
}
span {
font-family: Georgia;
font-size: 36px;
opacity: 0;
}
</style>
</head>
<body>
<audio controls></audio>
<br>
<span></span>
<script type="text/javascript">
window.onload = function() {
var url = "/path/to/audio";
// given 240 seconds total duration of audio
// 240/12 = 20
// properties correspond to `<audio>` `.currentTime`,
// values correspond to color to set at element
var colors = {
0: "red",
20: "blue",
40: "green",
60: "yellow",
80: "orange",
100: "purple",
120: "violet",
140: "brown",
160: "tan",
180: "gold",
200: "sienna",
220: "skyblue"
};
var body = document.querySelector("body");
var mediaSource = new MediaSource;
var audio = document.querySelector("audio");
var span = document.querySelector("span");
var color = window.getComputedStyle(body)
.getPropertyValue("background-color");
//console.log(mediaSource.readyState); // closed
var mimecodec = "audio/mpeg";
audio.oncanplay = function() {
this.play();
}
audio.ontimeupdate = function() {
// 240/12 = 20
var curr = Math.round(this.currentTime);
if (colors.hasOwnProperty(curr)) {
// set `color` to `colors[curr]`
color = colors[curr]
}
// animate `<span>` every 60 seconds
if (curr % 60 === 0 && span.innerHTML === "") {
var t = curr / 60;
span.innerHTML = t + " minute" + (t === 1 ? "" : "s")
+ " of " + Math.round(this.duration) / 60
+ " minutes of audio";
span.animate([{
opacity: 0
}, {
opacity: 1
}, {
opacity: 0
}], {
duration: 2500,
iterations: 1
})
.onfinish = function() {
span.innerHTML = ""
}
}
// change `background-color` of `body` every 20 seconds
body.style.backgroundColor = color;
console.log("current time:", curr
, "current background color:", color
, "duration:", this.duration);
}
// set `<audio>` `.src` to `mediaSource`
audio.src = URL.createObjectURL(mediaSource);
mediaSource.addEventListener("sourceopen", sourceOpen);
function sourceOpen(event) {
// if the media type is supported by `mediaSource`
// fetch resource, begin stream read,
// append stream to `sourceBuffer`
if (MediaSource.isTypeSupported(mimecodec)) {
var sourceBuffer = mediaSource.addSourceBuffer(mimecodec);
// set `sourceBuffer` `.mode` to `"sequence"`
sourceBuffer.mode = "sequence";
fetch(url)
// return `ReadableStream` of `response`
.then(response => response.body.getReader())
.then(reader => {
var processStream = (data) => {
if (data.done) {
return;
}
// append chunk of stream to `sourceBuffer`
sourceBuffer.appendBuffer(data.value);
}
// at `sourceBuffer` `updateend` call `reader.read()`,
// to read next chunk of stream, append chunk to
// `sourceBuffer`
sourceBuffer.addEventListener("updateend", function() {
reader.read().then(processStream);
});
// start processing stream
reader.read().then(processStream);
// do stuff `reader` is closed,
// read of stream is complete
return reader.closed.then(() => {
// signal end of stream to `mediaSource`
mediaSource.endOfStream();
return mediaSource.readyState;
})
})
// do stuff when `reader.closed`, `mediaSource` stream ended
.then(msg => console.log(msg))
}
// if `mimecodec` is not supported by `MediaSource`
else {
alert(mimecodec + " not supported");
}
};
}
</script>
</body>
</html>
plnkr http://plnkr.co/edit/fIm1Qp?p=preview
There is no way for you to measure the latency directly, but any audio element generates events like 'playing' when it has just started playing (fired quite often), 'stalled' when streaming has stopped, or 'waiting' when data is loading. So what you can do is drive your animation based on these events.
So pause while stalled or waiting is fired, then continue the animation when playing fires again.
But I advise you to check the other events that might affect your flow ('error', for example, would be important for you).
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAudioElement
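A minimal sketch of that event-driven approach (the element id and the drawFrame() function are assumptions for illustration):
// Hedged sketch: gate the animation on the <audio> element's buffering events.
const audio = document.getElementById('player'); // assumed id
let animating = false;

audio.addEventListener('playing', () => { animating = true;  }); // audio really progressing
audio.addEventListener('waiting', () => { animating = false; }); // buffering, hold the animation
audio.addEventListener('stalled', () => { animating = false; }); // network stalled
audio.addEventListener('error',   () => { animating = false; console.log(audio.error); });

function frame() {
  if (animating) {
    // Drive the animation from the element's clock, not from wall time.
    drawFrame(audio.currentTime); // drawFrame() is your own rendering function
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);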
What I would try first is to create a timestamp with performance.now(), process the data, and record it in a blob with the MediaRecorder API.
The recorder will ask the user for access to their audio input; this can be a problem for your app, but it looks mandatory for getting the real latency.
As soon as this is done, there are many ways to measure the actual latency between the generation and the actual rendering. Basically, a sound event.
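A rough, hedged sketch of that measurement idea (the marker detection itself is left out; inserting a recognisable tone or silence into the generated audio is an assumption about your stream):
// Hedged sketch: record the playback through the microphone and keep the
// timestamp at which the generated audio was handed to the player.
async function measureLatency() {
  const sentAt = performance.now();     // moment the audio chunk is sent/played
  const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
  const recorder = new MediaRecorder(mic);
  const chunks = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    const blob = new Blob(chunks, { type: recorder.mimeType });
    // Decode the blob (e.g. with AudioContext.decodeAudioData) and look for the
    // marker tone/silence; its offset relative to sentAt approximates the
    // end-to-end latency.
    console.log('recorded', blob.size, 'bytes, reference timestamp', sentAt, 'ms');
  };
  recorder.start();
  setTimeout(() => recorder.stop(), 10000); // record ten seconds, then analyse
}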
For further reference and example:
Recorder demo
https://github.com/mdn/web-dictaphone/
https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder_API/Using_the_MediaRecorder_API