Capture first frame of an embedded video - javascript

I want to capture the first frame of an embedded video as an image, without using any server-side scripting. Is this possible, presumably with JavaScript?

Actually, I'm pretty sure you can, using HTML5. Take a look at this link: HTML5 Video Destruction.
It copies the video frame into another canvas every 33ms. You could play around with this and see if you can capture the first frame when the video starts running.
I can look into this further if you like (it fascinates me).
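The copy loop in that demo boils down to something like this (a sketch; the element ids are illustrative assumptions, not taken from the demo):
// Copy the current video frame into a canvas every 33ms (~30fps).
var video = document.getElementById("src-video");   // hypothetical id
var canvas = document.getElementById("dst-canvas"); // hypothetical id
var ctx = canvas.getContext("2d");
setInterval(function() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
}, 33);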
EDIT: oh my GOD THIS IS COOL. I just came up with a solution. Go to sambro.is-super-awesome.com/videofirstframe/
You need to open this in Google Chrome. Firefox doesn't support mp4 (I think).
First time I've ever done anything with HTML5, I CANNOT wait until this is in the majority of browsers :)
EDIT EDIT: Okay, I uploaded the .ogg version of this video too and set up my web server to serve the video type correctly, so Firefox should work in this little example too.
EDIT EDIT EDIT: For the nitpickers wanting the source up here, here it is:
// Create a video element.
var vid = document.createElement("video");
// We don't want it to start playing yet.
vid.autoplay = false;
vid.loop = false;
// No need for the user to see the video itself.
vid.style.display = "none";
// This fires when there's some data loaded for the video; there should be at least one frame here.
vid.addEventListener("loadeddata", function() {
    // Let's wait another 100ms just in case.
    setTimeout(function() {
        // Create a canvas element; this is what the user sees.
        var canvas = document.createElement("canvas");
        // Set it to the same dimensions as the video.
        canvas.width = vid.videoWidth;
        canvas.height = vid.videoHeight;
        // Put it on the page.
        document.getElementById("done").innerHTML = "";
        document.getElementById("done").appendChild(canvas);
        // Get the drawing context for the canvas.
        var ctx = canvas.getContext("2d");
        // Draw the current frame of the video onto the canvas.
        ctx.drawImage(vid, 0, 0);
        // Done!
    }, 100); // the 100ms delay mentioned above
}, false);
// Have to include .ogv for Firefox. I don't think this is working at the
// moment because my web server isn't serving up videos properly.
// (BrowserDetect is an external browser-sniffing helper used by the demo.)
if (BrowserDetect.browser == "Firefox") {
    var source = document.createElement("source");
    source.src = "BigBuckBunny_640x360.ogv";
    source.type = "video/ogg";
    vid.appendChild(source);
} else {
    var source = document.createElement("source");
    source.src = "BigBuckBunny_640x360.mp4";
    source.type = "video/mp4";
    vid.appendChild(source);
}
// Add the video to the document to start the loading process now.
document.body.appendChild(vid);
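To get an actual image out of the captured frame, you can export the canvas afterwards (assuming the canvas from the snippet above):
// Export the canvas contents as a PNG data URL and show it in an <img>.
// Note: this only works if the video is same-origin (or served with CORS
// headers); otherwise the canvas is tainted and toDataURL() throws.
var img = document.createElement("img");
img.src = canvas.toDataURL("image/png");
document.getElementById("done").appendChild(img);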

Related

Adding panner / spatial audio to Web Audio Context from a WebRTC stream not working

I would like to create a Web Audio panner to position the sound from a WebRTC stream.
I have the stream connecting OK and can hear the audio and see the video, but the panner does not have any effect on the audio (changing panner.setPosition(10000, 0, 0) to + or - 10000 makes no difference to the sound).
This is the onaddstream function where the audio and video get piped into a video element, and where I presume I need to add the panner.
There are no errors, it just isn't panning at all.
What am I doing wrong?
Thanks!
peer_connection.onaddstream = function(event) {
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioCtx = new AudioContext();
    audioCtx.listener.setOrientation(0, 0, -1, 0, 1, 0);
    var panner = audioCtx.createPanner();
    panner.panningModel = 'HRTF';
    panner.distanceModel = 'inverse';
    panner.refDistance = 1;
    panner.maxDistance = 10000;
    panner.rolloffFactor = 1;
    panner.coneInnerAngle = 360;
    panner.coneOuterAngle = 0;
    panner.coneOuterGain = 0;
    panner.setPosition(10000, 0, 0); // this doesn't do anything
    // peerInput is assumed to be defined elsewhere in the poster's code.
    peerInput.connect(panner);
    panner.connect(audioCtx.destination);
    // attach the stream to the document element
    var remote_media = USE_VIDEO ? $("<video>") : $("<audio>");
    remote_media.attr("autoplay", "autoplay");
    if (MUTE_AUDIO_BY_DEFAULT) {
        remote_media.attr("muted", "false");
    }
    remote_media.attr("controls", "");
    peer_media_elements[peer_id] = remote_media;
    $('body').append(remote_media);
    attachMediaStream(remote_media[0], event.stream);
}
Try creating a source from the event's stream before wiring up the panner:
var source = audioCtx.createMediaStreamSource(event.stream);
Reference: Mozilla Developer Network - AudioContext
createPanner reference: Mozilla Developer Network - createPanner
3rd Party Library: wavesurfer.js
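A minimal sketch of that wiring inside onaddstream, reusing the audioCtx and panner set up in the question (an illustration, not a confirmed fix):
// Route the incoming WebRTC audio through the panner instead of the
// undefined peerInput.
var source = audioCtx.createMediaStreamSource(event.stream);
source.connect(panner);
panner.connect(audioCtx.destination);
Note that if the same stream is also attached to an unmuted media element, you'll hear that element's unpanned audio mixed in with the panned output.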
Remove all the options you've set for the panner node and see if that helps. (The cone angles seem a little funny to me, but I always forget how they work.)
If that doesn't work, create a smaller test with the panner but use a simple oscillator as the input. Play around with the parameters and positions to make sure it does what you want.
Put this back into your app. Things should work then.
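For example, a self-contained panner test might look like this (a sketch; the position values are arbitrary):
// Minimal panner test: a sine tone that should sound hard right.
var ctx = new (window.AudioContext || window.webkitAudioContext)();
var osc = ctx.createOscillator();
osc.frequency.value = 440; // A4 test tone
var panner = ctx.createPanner();
panner.panningModel = 'HRTF';
panner.setPosition(10, 0, 0); // positive x = to the listener's right
osc.connect(panner);
panner.connect(ctx.destination);
osc.start();
osc.stop(ctx.currentTime + 2); // play for two seconds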
Figured this out for myself.
The problem was not the code; it was that I was connected via Bluetooth audio.
Bluetooth apparently can only do stereo audio with the microphone turned off. As soon as you activate the mic, that steals one of the channels and the audio output downgrades to mono.
With mono audio you definitely cannot do 3D positioned sound, hence my thinking the code was not working.

How to (feature) detect if browser supports WebM alpha transparency?

I'm integrating a *.webm video with alpha transparency. At the moment, the transparency is only supported in Chrome and Opera. (Demo: http://simpl.info/videoalpha/) Firefox, for example, plays the video since it supports the WebM format, but shows a black background instead of the transparency.
My plan is to display the video poster image instead of the video, if the browser does not support alpha transparency. So the video should only play, if the browser supports WebM alpha transparency. I know how to detect the browser or the rendering engine and therefore play the video (see code below) - but is there a "feature detection" way?
var supportsAlphaVideo = /Chrome/.test(navigator.userAgent) && /Google Inc/.test(navigator.vendor) || /OPR/.test(navigator.userAgent);
if (supportsAlphaVideo) {
    document.querySelector(".js-video").play();
}
See also http://updates.html5rocks.com/2013/07/Alpha-transparency-in-Chrome-video
Here's a working solution to test for alpha support in WebM.
I basically combined Capture first frame of an embedded video and check_webp_feature
The video used to test with is base64-encoded into the source. It's actually a tiny VP9 WebM video encoded using:
ffmpeg -i alpha.png -c:v libvpx-vp9 alpha.webm
If you want to test for VP8 alpha support instead, just encode your own and remove the -vp9. alpha.png is a 64x64 pixel 100% transparent PNG image.
var supportsWebMAlpha = function(callback) {
    var vid = document.createElement('video');
    vid.autoplay = false;
    vid.loop = false;
    vid.style.display = "none";
    vid.addEventListener("loadeddata", function() {
        document.body.removeChild(vid);
        // Create a canvas element to inspect the decoded frame.
        var canvas = document.createElement("canvas");
        // If we don't support the canvas, we definitely don't support WebM alpha video.
        if (!(canvas.getContext && canvas.getContext('2d'))) {
            callback(false);
            return;
        }
        // Get the drawing context for the canvas.
        var ctx = canvas.getContext("2d");
        // Draw the current frame of the video onto the canvas.
        ctx.drawImage(vid, 0, 0);
        // The test video is fully transparent, so if the sampled pixel's
        // alpha is 0, the browser decoded the alpha channel correctly.
        if (ctx.getImageData(0, 0, 1, 1).data[3] === 0) {
            callback(true);
        } else {
            callback(false);
        }
    }, false);
    vid.addEventListener("error", function() {
        document.body.removeChild(vid);
        callback(false);
    });
    vid.addEventListener("stalled", function() {
        document.body.removeChild(vid);
        callback(false);
    });
    // Just in case.
    vid.addEventListener("abort", function() {
        document.body.removeChild(vid);
        callback(false);
    });
    var source = document.createElement("source");
source.src="data:video/webm;base64,GkXfowEAAAAAAAAfQoaBAUL3gQFC8oEEQvOBCEKChHdlYm1Ch4ECQoWBAhhTgGcBAAAAAAACBRFNm3RALE27i1OrhBVJqWZTrIHlTbuMU6uEFlSua1OsggEjTbuMU6uEHFO7a1OsggHo7AEAAAAAAACqAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAVSalmAQAAAAAAADIq17GDD0JATYCNTGF2ZjU3LjU3LjEwMFdBjUxhdmY1Ny41Ny4xMDBEiYhARAAAAAAAABZUrmsBAAAAAAAARq4BAAAAAAAAPdeBAXPFgQGcgQAitZyDdW5khoVWX1ZQOYOBASPjg4QCYloA4AEAAAAAAAARsIFAuoFAmoECU8CBAVSygQQfQ7Z1AQAAAAAAAGfngQCgAQAAAAAAAFuhooEAAACCSYNCAAPwA/YAOCQcGFQAADBgAABnP///NXgndmB1oQEAAAAAAAAtpgEAAAAAAAAk7oEBpZ+CSYNCAAPwA/YAOCQcGFQAADBgAABnP///Vttk7swAHFO7awEAAAAAAAARu4+zgQC3iveBAfGCAXXwgQM=";
    source.addEventListener("error", function() {
        document.body.removeChild(vid);
        callback(false);
    });
    vid.appendChild(source);
    // This is required for IE.
    document.body.appendChild(vid);
};
supportsWebMAlpha(function(result) {
    if (result) {
        alert('Supports WebM Alpha');
    } else {
        alert('Doesn\'t support WebM Alpha');
    }
});
There are no properties exposed giving any information about the video and its channels.
The only way to do this is either:
Knowing in advance: incorporate that knowledge with the data and serve it to the browser as metadata when the video is requested
Use a canvas to analyze the image data
Load the file as binary data, then parse the WebM format manually to extract this information. Doable, but very inconvenient, as the complete file must be downloaded, and of course a parser must be written.
If you don't know in advance, or have no way to supply the metadata, then canvas is your best option.
Canvas
You can use a canvas to test for actual transparency; however, this does have CORS requirements (the video must be on the same server, or the external server needs to allow cross-origin use).
Additionally, you have to actually start loading the video, which can have an impact on bandwidth as well as performance. You probably want to do this with a dynamically created video and canvas tag.
From there, it is fairly straightforward.
Create a small canvas
Draw a frame into it (one that is expected to have an alpha channel)
Extract the pixels (CORS requirements here)
Loop through the buffer using a Uint32Array view and check each pixel's alpha byte for a value < 255 ((pixel >>> 24) !== 0xff on little-endian systems; comparing pixel & 0xff000000 directly runs into JavaScript's signed 32-bit bitwise arithmetic).
This is fairly fast to do, you can use a frame size of half or even smaller.
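A sketch of that check, assuming the frame has already been drawn onto a canvas 2D context ctx:
// Scan a frame for any non-opaque pixel using a Uint32Array view.
// ImageData bytes are RGBA, so on little-endian systems the alpha byte
// is the most significant byte of each 32-bit pixel value.
function frameHasAlpha(ctx, width, height) {
    var data = ctx.getImageData(0, 0, width, height).data; // CORS applies here
    var pixels = new Uint32Array(data.buffer);
    for (var i = 0; i < pixels.length; i++) {
        if ((pixels[i] >>> 24) !== 0xff) {
            return true; // found a pixel with alpha < 255
        }
    }
    return false;
}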

HTML5 camera buffering and delay (delayed mirror)

I'm not yet very familiar with HTML5 but have been looking for a project to delve into it.
Would the following functionality be possible using HTML5 and camera access?
Stage 1: live camera replay with adjustable delay (aka a delayed mirror)
Stage 2: selecting parts of the previously recorded live stream, with replay options available (continuous loop, slow motion, drawing into the picture, etc.)
Ideally this should run on an Android tablet.
This is meant as an application to provide immediate visual feedback for coaches and athletes.
Thanks for any feedback, it is much appreciated! :)
Tom
There are actually a few JS libs that can record a webcam feed. Check out RecordRTC. Here is some example code that might work (I haven't tested it):
navigator.getUserMedia = navigator.getUserMedia ||
    navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

function gotVideoStream(localMediaStream) {
    var video = document.querySelector("video");
    var recordRTC = RecordRTC(localMediaStream);
    recordRTC.startRecording();
    // In a real app you'd call stopRecording later (e.g. from a button),
    // not immediately after starting.
    recordRTC.stopRecording(function(videoURL) {
        var playbackVideo = document.getElementById('playback-vid');
        playbackVideo.src = videoURL;     // set src for playback
        playbackVideo.playbackRate = 0.5; // slow down playback
    });
    // set src for live preview
    video.src = window.URL.createObjectURL(localMediaStream);
    video.play();
}

function errorCallback(error) {
    console.log("navigator.getUserMedia error: ", error);
}

// get things rolling
navigator.getUserMedia({video: true}, gotVideoStream, errorCallback);
If that doesn't work, Google the subject for more resources.
The MDN tutorial on taking pictures with a webcam provides most of the pieces you need to implement this in a simple way.
Request a video media stream and connect it to a video element.
Draw the video element to a canvas.
Copy the canvas either to a data URL or raw image data.
After a delay show it on another canvas or in an img element.
Here is an example I wrote implementing a delayed mirror; a sketch of the approach follows below.
This is fine for a few seconds of video (for example, I can practice dance moves with it), but for recording and playing back longer streams you might run into memory problems.
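A minimal sketch of that approach, assuming a <video> element for the live preview and a <canvas> for the delayed output (the element lookups and the 2-second delay are illustrative assumptions):
// Delayed mirror: buffer frames from the camera and redraw each one
// DELAY_MS later on the canvas.
const DELAY_MS = 2000;
const video = document.querySelector("video");
const canvas = document.querySelector("canvas");
const ctx = canvas.getContext("2d");
// Hidden canvas used to capture frames from the live video.
const grab = document.createElement("canvas");
const grabCtx = grab.getContext("2d");
const buffer = []; // queued frames: { t, frame }

navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
    video.srcObject = stream;
    return video.play();
});

video.addEventListener("loadedmetadata", () => {
    grab.width = canvas.width = video.videoWidth;
    grab.height = canvas.height = video.videoHeight;
});

setInterval(() => {
    if (!grab.width) return; // stream not ready yet
    grabCtx.drawImage(video, 0, 0);
    buffer.push({ t: performance.now(),
                  frame: grabCtx.getImageData(0, 0, grab.width, grab.height) });
    // Draw the oldest frame once it is DELAY_MS old.
    while (buffer.length && performance.now() - buffer[0].t >= DELAY_MS) {
        ctx.putImageData(buffer.shift().frame, 0, 0);
    }
}, 33); // ~30 fps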

Detect if audio is playing in browser Javascript

Is there a global way to detect when audio is playing or starts playing in the browser, something along the idea of if (window.mediaPlaying()) {..., without having the code tied to a specific element?
EDIT: What's important here is to be able to detect ANY audio no matter where the audio comes from. Whether it comes from an iframe, a video, the Web Audio API, etc.
No one should use this, but it works.
Basically, the only way I found to access the entire window's audio is using MediaDevices.getDisplayMedia().
From there, a MediaStream can be fed into an AnalyserNode that can be used to check if the audio volume is greater than zero.
This only works in Chrome and maybe Edge (only tested in Chrome 80 on Linux).
JSFiddle with <video>, <audio> and YouTube!
Important bits of code (cannot post in a working snippet because of the Feature Policies on the snippet iframe):
var audioCtx = new AudioContext();
var analyser = audioCtx.createAnalyser();
var bufferLength = analyser.fftSize;
var dataArray = new Float32Array(bufferLength);

window.isAudioPlaying = () => {
    analyser.getFloatTimeDomainData(dataArray);
    for (var i = 0; i < bufferLength; i++) {
        if (dataArray[i] != 0) return true;
    }
    return false;
}

navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true
})
.then(stream => {
    if (stream.getAudioTracks().length > 0) {
        var source = audioCtx.createMediaStreamSource(stream);
        source.connect(analyser);
        document.body.classList.add('ready');
    } else {
        console.log('Failed to get stream. Audio not shared or browser not supported');
    }
}).catch(err => console.log("Unable to open capture: ", err));
I read all the MDN docs about the Web Audio API, but I didn't find any global flag on window that shows audio playing. However, there is a trick that checks every <audio> and <video> element on the page, though it does not cover the Web Audio API:
const allAudio = Array.from( document.querySelectorAll('audio') );
const allVideo = Array.from( document.querySelectorAll('video') );
const isPlaying = [...allAudio, ...allVideo].some(item => !item.paused);
Now, by the isPlaying flag we can detect if any audio or video is playing in the browser.
There is a playbackState property (https://developer.mozilla.org/en-US/docs/Web/API/MediaSession/playbackState), but not all browsers support it.
if(navigator.mediaSession.playbackState === "playing"){...
I was looking for a solution on Google, but I didn't find anything yet.
Maybe you could check some value that changes only when audio is playing. If you have a button that starts playing the audio file, you could be sure the audio is playing by adding an event listener to that button.
Maybe something like adding an event listener to the audio tag? If I remember correctly, the audio tag has a paused attribute.
Also, you may want to check this topic: HTML5 check if audio is playing? (I just found it five seconds ago, haha.)
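For reference, a check based on that paused property might look like this (a sketch, assuming an existing <audio id="player"> element):
// paused is true both before playback starts and while paused, so
// "not paused and not ended" is a reasonable proxy for "playing".
var player = document.getElementById("player");
if (!player.paused && !player.ended) {
    console.log("audio is playing");
}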

HTML5 video full preload in javascript

I have a high quality video which I cannot compress too much as it's going to be the base of a lot of image analysis whereby each frame will be redrawn into the canvas and then manipulated.
I'm trying to preload the whole thing before playing it as I can't have the video stop, buffer and continue. Is there an event which I can listen for which signifies that the whole video has preloaded before I commence playback?
Here's how I'm doing it in JS/jQuery:
this.canvas = this.el.find("canvas")[0];
this.video = this.el.find("video")[0];
this.ctx = this.canvas.getContext("2d");
this.video.autoplay = false;
this.video.addEventListener("play", this.draw);
this.video.addEventListener("timeupdate", this.draw);
// Wrapped in a function so trigger() fires on the event rather than immediately.
this.video.addEventListener("ended", function() { this.trigger("complete", this); }.bind(this));
This will load the entire video in JavaScript
var r = new XMLHttpRequest();
r.onload = function() {
    myVid.src = URL.createObjectURL(r.response);
    myVid.play();
};
if (myVid.canPlayType('video/mp4;codecs="avc1.42E01E, mp4a.40.2"')) {
    r.open("GET", "slide.mp4");
} else {
    r.open("GET", "slide.webm");
}
r.responseType = "blob";
r.send();
canplaythrough is the event that should fire when enough data has downloaded to play without buffering.
From the Opera team's excellent (although maybe slightly dated now) resource Everything you need to know about HTML5 video and audio:
If the load is successful, whether using the src attribute or using source elements, then as data is being downloaded, progress events are fired. When enough data has been loaded to determine the video's dimensions and duration, a loadedmetadata event is fired. When enough data has been loaded to render a frame, the loadeddata event is fired. When enough data has been loaded to be able to play a little bit of the video, a canplay event is fired. When the browser determines that it can play through the whole video without stopping for downloading more data, a canplaythrough event is fired; this is also when the video starts playing if it has an autoplay attribute.
'canplaythrough' support matrix available here: https://caniuse.com/mdn-api_htmlmediaelement_canplaythrough_event
You can get around the support limitations by binding the load event to the same function, as it will fire in those cases.
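A minimal sketch of waiting for it before starting playback (note that canplaythrough means the browser estimates it can play to the end without stalling, not that every byte has downloaded):
var video = document.querySelector("video");
video.addEventListener("canplaythrough", function onReady() {
    // Only start once; the event can fire again after seeking.
    video.removeEventListener("canplaythrough", onReady);
    video.play();
});
video.preload = "auto";
video.src = "slide.mp4"; // file name reused from the answer above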
Download the video using fetch
Convert the response to a blob
Create an object URL from the blob (e.g. blob:http://localhost:8080/df3c4336-2d9f-4ba9-9714-2e9e6b2b8888)
async function preloadVideo(src) {
    const res = await fetch(src);
    const blob = await res.blob();
    return URL.createObjectURL(blob);
}
Usage:
const video = document.createElement("video");
video.src = await preloadVideo("https://example.com/video.mp4");
Hope this helps:
var xhrReq = new XMLHttpRequest();
xhrReq.open('GET', 'yourVideoSrc', true);
xhrReq.responseType = 'blob';
xhrReq.onload = function() {
    if (this.status === 200) {
        var vid = URL.createObjectURL(this.response);
        video.src = vid;
    }
};
xhrReq.onerror = function() {
    console.log('err', arguments);
};
xhrReq.onprogress = function(e) {
    if (e.lengthComputable) {
        var percentComplete = ((e.loaded / e.total) * 100 | 0) + '%';
        console.log('progress: ', percentComplete);
    }
};
xhrReq.send();
And then, if your video src is on another domain, you have to handle CORS.
So far the most reliable solution we found was to play the video and wait for the buffer to be fully loaded.
This means that if the video is long, you will have to wait for almost the entire video length.
That isn't cool, I know.
Wondering if someone has figured out some other magically reliable way of doing it (ideally using something like PreloadJS, which automatically falls back to Flash when HTML5 video isn't supported).
You can use this nice plugin:
https://github.com/GianlucaGuarini/jquery.html5loader
Its API provides an onComplete event that is triggered when the plugin finishes loading all the sources.
Does this work?
video.onloadeddata = function() {
    video.onseeked = function() {
        if (video.seekable.end(0) >= video.duration - 0.1) {
            alert("Video is all loaded!");
        } else {
            video.currentTime = video.buffered.end(0); // seek ahead to force more buffering
        }
    };
    video.currentTime = 0; // first seek to trigger the event
};
