I am building a web application for experimental purposes. The aim is to capture ~15-20 frames per second from the webcam and send them to the server. Once a frame is captured, it is converted to base64 and added to an array. After a certain time, the array is sent to the server. Currently I am using imageCapture.takePhoto() to achieve this. I get a Blob as a result, which is then converted to base64. The application runs for ~5 seconds, during which frames are captured and sent to the server.
What are more efficient ways to capture frames from the webcam to achieve this?
You can capture still images directly from the <video> element used to preview the stream from .getUserMedia(). You set up that preview, of course, by doing this sort of thing (pseudocode).
const stream = await navigator.mediaDevices.getUserMedia(options)
const videoElement = document.querySelector('video#whateverId')
videoElement.srcObject = stream
videoElement.play()
Next, make yourself a canvas object and a context for it. It doesn't have to be visible.
const scratchCanvas = document.createElement('canvas')
scratchCanvas.width = videoElement.videoWidth   // available once 'loadedmetadata' has fired
scratchCanvas.height = videoElement.videoHeight
const scratchContext = scratchCanvas.getContext('2d')
Now you can make yourself a function like this.
function stillCapture(video, canvas, context) {
  context.drawImage(video, 0, 0, video.videoWidth, video.videoHeight)
  canvas.toBlob(
    function (jpegBlob) {
      /* do something useful with the Blob containing jpeg */
    }, 'image/jpeg')
}
A Blob containing a jpeg version of a still capture shows up in the callback. Do with it whatever you need to do.
Then, invoke that function every so often. For example, to get approximately 15fps, do this.
const howOften = 1000.0 / 15.0
setInterval(stillCapture, howOften, videoElement, scratchCanvas, scratchContext)
All this saves you the extra work of using .takePhoto().
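Since your goal is base64, a small helper like this can run inside the toBlob() callback. This is a sketch using the standard FileReader API; the function name is mine.

function blobToBase64(jpegBlob) {
  return new Promise(function (resolve) {
    const reader = new FileReader()
    // reader.result will be a data URL: "data:image/jpeg;base64,..."
    reader.onloadend = function () { resolve(reader.result) }
    reader.readAsDataURL(jpegBlob)
  })
}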
I made changes to an audio buffer, such as gain and panning, and connected the nodes to an audio context.
Now I want to save the result to a file with all the applied changes.
Saving the buffer as-is would give me the original audio without the changes.
Is there a method or procedure to do that?
One way is to use a MediaRecorder to save the modified audio.
So, in addition to connecting to the destination, connect to a MediaStreamAudioDestinationNode. This node has a stream object that you can use to initialize a MediaRecorder. Set up the recorder to save the data when data is available. When you're done recording, you have a blob that you can then download.
Many details are missing here, but you can find out how to use a MediaRecorder using the MDN example.
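A minimal sketch of that wiring, assuming audioCtx is your AudioContext and gainNode is the last node in your processing chain:

// Feed the processed audio into a recordable MediaStream
const dest = audioCtx.createMediaStreamDestination();
gainNode.connect(audioCtx.destination); // keep hearing the audio
gainNode.connect(dest);                 // also feed the recorder

const chunks = [];
const recorder = new MediaRecorder(dest.stream);
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  // One Blob containing the whole recording, ready to download
  const blob = new Blob(chunks, { type: recorder.mimeType });
  const url = URL.createObjectURL(blob);
  // e.g. assign url to the href of an <a download="audio.webm"> element
};

recorder.start();
// ...play the audio through the graph, then call recorder.stop()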
I found a solution, with OfflineAudioContext.
Here is an example that applies a gain change to my audio and saves the result.
At the end of the code I get the rendered AudioBuffer with the changes I made.
From there, I can go on to saving the file.
let offlineCtx = new OfflineAudioContext(
    this.buffer.numberOfChannels,
    this.buffer.length,
    this.buffer.sampleRate
);
let obs = offlineCtx.createBufferSource();
obs.buffer = this.buffer;
let gain = offlineCtx.createGain();
gain.gain.value = this.gain.gain.value;
obs.connect(gain).connect(offlineCtx.destination);
obs.start();
// startRendering() resolves with an AudioBuffer containing the processed audio
let rendered = await offlineCtx.startRendering();
let obsRES = this.ctx.createBufferSource();
obsRES.buffer = rendered;
I am currently using the webcam (not the native camera) on a web page to take a photo on users' mobile phones, like this:
var video: HTMLVideoElement;
...
var context = canvas.getContext('2d');
context.drawImage(video, 0, 0, width, height);
var jpegData = canvas.toDataURL('image/jpeg', compression);
In such a way, I can now successfully generate a JPEG image data from web camera, and display it on the web page.
However, I found that the EXIF data is missing.
According to this:
Canvas.drawImage() will ignore all EXIF metadata in images,
including the Orientation. This behavior is especially troublesome
on iOS devices. You should detect the Orientation yourself and use
rotate() to make it right.
I would love the JPEG image to contain the EXIF GPS data. Is there a simple way to include the camera's EXIF data during the process?
Thanks!
Tested on a Pixel 3 - it works. Please note: sometimes it does not work with some desktop webcams. You will need exif-js to get the EXIF object in this example.
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
const track = stream.getVideoTracks()[0];
let imageCapture = new ImageCapture(track);
imageCapture.takePhoto().then((blob) => {
    const newFile = new File([blob], "MyJPEG.jpg", { type: "image/jpeg" });
    EXIF.getData(newFile, function () {
        const make = EXIF.getAllTags(newFile);
        console.log("All data", make);
    });
});
Unfortunately, there's no way to extract EXIF from a canvas.
However, if you have access to the original JPEG, you can extract EXIF from that. For that I'd recommend exifr instead of the widely popular exif-js, because exif-js has been unmaintained for two years and still has breaking bugs in it (n is undefined).
With exifr you can either parse everything
exifr.parse('./myimage.jpg').then(output => {
    console.log('Camera:', output.Make, output.Model)
})
or just a few tags
let output = await exifr.parse(file, ['ISO', 'Orientation', 'LensModel'])
First off, according to what I found so far, there is no way to include EXIF data during canvas context drawing.
Second, there is a workaround: extract the EXIF data from the original JPEG file, then, after the canvas context drawing, put the extracted EXIF data back into the newly drawn JPEG file.
It's messy and a little hacky, but for now this is the workaround.
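For reference, a sketch of that extract-and-reinsert idea using the piexifjs library (the variable names are mine; both inputs are JPEG data URLs):

// Read the EXIF from the original JPEG, then splice it into the canvas output
var exifObj = piexif.load(originalDataUrl);               // EXIF tags from the source JPEG
var exifBytes = piexif.dump(exifObj);                     // serialize the tags
var withExif = piexif.insert(exifBytes, canvasDataUrl);   // new JPEG data URL with EXIF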
Thanks!
I have an array of Blobs (binary data, really -- I can express it however is most efficient; I'm using Blobs for now, but maybe a Uint8Array or something would be better). Each Blob contains 1 second of audio/video data. Every second a new Blob is generated and appended to my array. So the code roughly looks like this:
var arrayOfBlobs = [];

setInterval(function() {
    arrayOfBlobs.push(nextChunk());
}, 1000);
My goal is to stream this audio/video data to an HTML5 <video> element. I know that a Blob URL can be generated and played like so:
var src = URL.createObjectURL(arrayOfBlobs[0]);
var video = document.getElementsByTagName("video")[0];
video.src = src;
Of course this only plays the first 1 second of video. I also assume I can trivially concatenate all of the Blobs currently in my array somehow to play more than one second:
// Something like this (untested)
var concatenatedBlob = new Blob(arrayOfBlobs);
var src = ...
However this will still eventually run out of data. As Blobs are immutable, I don't know how to keep appending data as it's received.
I'm certain this should be possible because YouTube and many other video streaming services utilize Blob URLs for video playback. How do they do it?
Solution
After some significant Googling I managed to find the missing piece to the puzzle: MediaSource
Effectively the process goes like this:
Create a MediaSource
Create an object URL from the MediaSource
Set the video's src to the object URL
On the sourceopen event, create a SourceBuffer
Use SourceBuffer.appendBuffer() to add all of your chunks to the video
This way you can keep adding new bits of video without changing the object URL.
Caveats
The SourceBuffer object is very picky about codecs. These have to be declared, and must be exact, or it won't work.
You can only append one chunk of video data to the SourceBuffer at a time, and you can't append a second chunk until the first one has finished (asynchronously) processing; see the conversion note after this list for the expected data type.
If you append too much data to the SourceBuffer without calling .remove(), then you'll eventually run out of RAM and the video will stop playing. I hit this limit at around 1 hour on my laptop.
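On that second caveat: SourceBuffer.appendBuffer() takes an ArrayBuffer (or an ArrayBufferView), not a Blob, so if your chunks arrive as Blobs (e.g. from a MediaRecorder), convert them first. A minimal sketch:

// Blob -> ArrayBuffer before handing the data to the SourceBuffer
const buffer = await blob.arrayBuffer();
sourceBuffer.appendBuffer(buffer);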
Example Code
Depending on your setup, some of this may be unnecessary (particularly the part where we build a queue of video data before we have a SourceBuffer then slowly append our queue using updateend). If you are able to wait until the SourceBuffer has been created to start grabbing video data, your code will look much nicer.
<html>
<head>
</head>
<body>
    <video id="video"></video>
    <script>
        // As before, I'm regularly grabbing blobs of video data
        // The implementation of "nextChunk" could be various things:
        //   - reading from a MediaRecorder
        //   - reading from an XMLHttpRequest
        //   - reading from a local webcam
        //   - generating the files on the fly in JavaScript
        //   - etc
        // (nextChunk() is assumed to yield ArrayBuffers, since that is
        // what SourceBuffer.appendBuffer() expects -- see the caveats above)
        var arrayOfBlobs = [];

        setInterval(function() {
            arrayOfBlobs.push(nextChunk());
            // NEW: Try to flush our queue of video data to the video element
            appendToSourceBuffer();
        }, 1000);

        // 1. Create a `MediaSource`
        var mediaSource = new MediaSource();

        // 2. Create an object URL from the `MediaSource`
        var url = URL.createObjectURL(mediaSource);

        // 3. Set the video's `src` to the object URL
        var video = document.getElementById("video");
        video.src = url;

        // 4. On the `sourceopen` event, create a `SourceBuffer`
        var sourceBuffer = null;
        mediaSource.addEventListener("sourceopen", function()
        {
            // NOTE: Browsers are VERY picky about the codec being EXACTLY
            // right here. Make sure you know which codecs you're using!
            sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");

            // If we requested any video data prior to setting up the SourceBuffer,
            // we want to make sure we only append one chunk at a time
            sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
        });

        // 5. Use `SourceBuffer.appendBuffer()` to add all of your chunks to the video
        function appendToSourceBuffer()
        {
            if (
                mediaSource.readyState === "open" &&
                sourceBuffer &&
                sourceBuffer.updating === false &&
                arrayOfBlobs.length > 0
            )
            {
                sourceBuffer.appendBuffer(arrayOfBlobs.shift());
            }

            // Limit the total buffer size to 20 minutes
            // This way we don't run out of RAM
            if (
                video.buffered.length &&
                video.buffered.end(0) - video.buffered.start(0) > 1200
            )
            {
                sourceBuffer.remove(0, video.buffered.end(0) - 1200);
            }
        }
    </script>
</body>
</html>
As an added bonus this automatically gives you DVR functionality for live streams, because you're retaining 20 minutes of video data in your buffer (you can seek by simply using video.currentTime = ...)
Adding to the previous answer...
Make sure to set sourceBuffer.mode = 'sequence' in the MediaSource sourceopen event handler to ensure the data is appended in the order it is received. The default value is 'segments', which buffers until the next 'expected' timeframe is loaded.
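For example, slotted into the previous answer's sourceopen handler:

mediaSource.addEventListener("sourceopen", function()
{
    sourceBuffer = mediaSource.addSourceBuffer("video/webm; codecs=\"opus,vp8\"");
    sourceBuffer.mode = "sequence"; // append in arrival order, not by timestamps
    sourceBuffer.addEventListener("updateend", appendToSourceBuffer);
});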
Additionally, make sure that you are not sending any packets with a data.size === 0, and make sure that no backlog builds up, by clearing the queue on the broadcasting side -- unless you want to record it as one entire video, in which case just make sure the broadcast video is small enough and your internet connection is fast. The smaller the video and the lower the resolution, the more likely you can keep a realtime connection with a client, e.g. for a video call.
For iOS, the broadcast needs to be made from an iOS/macOS application and be in mp4 format. The video chunk gets saved to the app's cache and then removed once it is sent to the server. A client can connect to the stream using either a web browser or an app on nearly any device.
I want to play a video in an HTML video element and take a snapshot on pause. The snapshot is displayed on the page inside a canvas. Now I want the same snapshot to appear on another page, and for this I am trying to encode it in base64 using the toDataURL() method and pass it through the URL.
But the maximum length of a URL can be 2048 characters, while the output of toDataURL() is much bigger. How should I proceed?
Working fine:
video.addEventListener('pause', function(){
    $(this).hide();
    $("#canvas1").show();
    draw(video, thecanvas, img);
}, false);

function draw(video, thecanvas, img){
    var context = thecanvas.getContext('2d');
    context.drawImage(video, 0, 0, thecanvas.width, thecanvas.height);
    var dataURL = thecanvas.toDataURL('image/jpeg', .1);
    img.setAttribute('src', dataURL);
}
Not working: The function to direct to another page
function toskuentry(){
    var imgsrc = $('#thumbnail_img').attr('src');
    window.location.href = "sku_entry.php?imgsrc=" + imgsrc;
}
Don't pass it through the URL, use HTML5 Web Storage. You can use either sessionStorage or localStorage:
function toskuentry(){
    localStorage.setItem("img", $('#thumbnail_img').attr('src'));
}
On the next page you can access it by localStorage.getItem("img");.
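For example, on sku_entry.php (reusing the question's #thumbnail_img id):

// Read the stored data URL back and show it in the image element
$('#thumbnail_img').attr('src', localStorage.getItem('img'));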
One good option would be to use Firebase. They have an example of doing that.
https://www.firebase.com/tutorial/#session/n24v8lvnltc
Firebase uses local storage when offline, so that means you could also use local storage if you didn't want to use Firebase.
something like
localStorage["data"] = dataURL;
//...other page
var dataURL = JSON.parse(localStorage["data"]);
I need to take HTML5 canvas output as a video or an SWF of a PNG sequence.
I found the following link on Stack Overflow for capturing images:
Capture HTML Canvas as gif/jpg/png/pdf?
But can anyone suggest how to get the output as a video or an SWF of a PNG sequence?
EDIT:
OK, now I understand how to capture the canvas data to store on the server. I tried it, and it works fine if I use only shapes, rectangles or other graphics, but not if I draw external images on the canvas element.
Can anyone tell me how to capture canvas data completely, whether we use graphics or external images for drawing on the canvas?
I used the following code:
var cnv = document.getElementById("myCanvas");
var ctx = cnv.getContext("2d");
if (ctx)
{
    var img = new Image();
    ctx.fillStyle = "rgba(255,0,0,0.5)";
    ctx.fillRect(0, 0, 300, 300);
    ctx.fill();
    img.onload = function()
    {
        ctx.drawImage(img, 0, 0);
    }
    img.src = "my external image path";
    var data = cnv.toDataURL("image/png");
}
After taking the canvas data into my "data" variable, I created a new canvas and drew the captured data onto it. The red rectangle appears on the second canvas, but the external image does not.
Thanks in advance.
I would suggest:
Use setInterval to capture the contents of your Canvas as a PNG data URL.
function PNGSequence( canvas ){
    this.canvas = canvas;
    this.sequence = [];
};

PNGSequence.prototype.capture = function( fps ){
    var cap = this;
    this.sequence.length = 0;
    this.timer = setInterval(function(){
        cap.sequence.push( cap.canvas.toDataURL() );
    }, 1000/fps);
};

PNGSequence.prototype.stop = function(){
    if (this.timer) clearInterval(this.timer);
    delete this.timer;
    return this.sequence;
};

var myCanvas = document.getElementById('my-canvas-id');
var recorder = new PNGSequence( myCanvas );
recorder.capture(15);

// Record 5 seconds
setTimeout(function(){
    var thePNGDataURLs = recorder.stop();
}, 5000 );
Send all these PNG DataURLs to your server. It'll be a very large pile of data.
Using whatever server-side language you like (PHP, Ruby, Python) strip the headers from the data URLs so that you are left with just the base64 encoded PNGs
Using whatever server-side language you like, convert the base64 data to binary and write out temporary files.
Using whatever 3rd party library you like on the server, convert the sequence of PNG files to a video.
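If the server side happens to be Node.js, steps 2 and 3 might look roughly like this (a sketch; thePNGDataURLs is assumed to be the array of strings received from the client):

const fs = require('fs');

thePNGDataURLs.forEach(function(dataUrl, i){
    // Strip the "data:image/png;base64," header, keeping only the payload
    var base64 = dataUrl.split(',')[1];
    // Decode the base64 to binary and write a numbered frame file
    var name = 'frame-' + String(i).padStart(4, '0') + '.png';
    fs.writeFileSync(name, Buffer.from(base64, 'base64'));
});
// Step 4 could then be handled by e.g. ffmpeg:
//   ffmpeg -framerate 15 -i frame-%04d.png out.mp4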
Edit: Regarding your comment about external images: you cannot create a data URL from a canvas that is not origin-clean. As soon as you use drawImage() with an external image, your canvas is tainted. From that link:
All canvas elements must start with their origin-clean set to true.
The flag must be set to false if any of the following actions occur:
[...]
The element's 2D context's drawImage() method is called with an HTMLImageElement or an HTMLVideoElement whose origin is not the same as that of the Document object that owns the canvas element.
[...]
Whenever the toDataURL() method of a canvas element whose origin-clean flag is set to false is called, the method must raise a SECURITY_ERR exception.
Whenever the getImageData() method of the 2D context of a canvas element whose origin-clean flag is set to false is called with otherwise correct arguments, the method must raise a SECURITY_ERR exception.
To start out, you want to capture the pixel data from the canvas on a regular interval (probably using JavaScript timers). You can do this by calling context.getImageData() on the canvas's context. That will give you a series of images that you can turn into a video stream.
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation
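A minimal sketch of that polling approach (the canvas id and frame rate here are illustrative):

var cnv = document.getElementById('my-canvas-id');
var ctx = cnv.getContext('2d');
var frames = [];

setInterval(function(){
    // getImageData returns the raw RGBA pixels for the whole canvas
    frames.push(ctx.getImageData(0, 0, cnv.width, cnv.height));
}, 1000 / 15);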