Attach an image to a moving element in a video using canvas - javascript

How can I use an HTML5 canvas to attach a person's face to a moving element in a video using chroma key, so that the face moves along with that element?
I came across the website JibJab, which makes use of this.
I just want to know how this can be done using HTML5 canvas.
As per my research, we can place a video over another video or even over a static image (http://www.xindustry.com/html5greenscreen/), but how to move a static image along with the video is what I'm looking for.
I also found this solution, but it creates a transparent element in the video rather than attaching an image to it.
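A minimal sketch of one way to approach this, assuming the green-screen patch in the video marks the moving element the face should follow; the element ids (source-video, face-img, output-canvas) and the green threshold are illustrative assumptions, not part of any particular library:

// Each frame: draw the video to a canvas, find the green region, and
// draw the face image centred on that region.
const video = document.getElementById('source-video');
const faceImg = document.getElementById('face-img');      // <img> of the face
const canvas = document.getElementById('output-canvas');
const ctx = canvas.getContext('2d');

function drawFrame() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const d = frame.data;
  let sumX = 0, sumY = 0, count = 0;

  // Centroid of strongly green pixels (the chroma-key marker).
  for (let i = 0; i < d.length; i += 4) {
    const r = d[i], g = d[i + 1], b = d[i + 2];
    if (g > 100 && g > r * 1.5 && g > b * 1.5) {
      const p = i / 4;
      sumX += p % canvas.width;
      sumY += Math.floor(p / canvas.width);
      count++;
    }
  }

  if (count > 0) {
    const cx = sumX / count, cy = sumY / count;
    // Center the face image on the tracked green region.
    ctx.drawImage(faceImg, cx - faceImg.width / 2, cy - faceImg.height / 2);
  }
  requestAnimationFrame(drawFrame);
}

video.addEventListener('play', () => requestAnimationFrame(drawFrame));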

Related

How to have multiple HTML5 canvas with previous, next and add icon

I am making a whiteboard based on the HTML5 canvas and have added many features like pencil, eraser, upload image, upload PDF and so on. Could anyone help me with how to have multiple canvases that can be accessed via previous and next buttons? I also need an add button for adding a canvas. I have already built the buttons in HTML and CSS and need help with the JavaScript.
This is a picture of the next and previous whiteboard buttons, and here is a picture of the add and previous whiteboard buttons. When the page is on the last whiteboard, the add icon should be shown. I think there should also be a whiteboard limit, which would help conserve browser memory.
Instead of adding multiple canvases, you could use only one canvas:
on the add-canvas button, save the current state of the canvas and clear it;
on pressing previous, save the current state and load the previous one (and the same for next).
This way you won't have to worry about creating too many canvases and running into memory problems.
So basically you have one canvas and an array of states, and you load a state based on its index.
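A minimal sketch of this one-canvas approach, assuming ImageData snapshots are stored in an array; the element ids (whiteboard, add-btn, prev-btn, next-btn) are illustrative assumptions:

const canvas = document.getElementById('whiteboard');
const ctx = canvas.getContext('2d');
const states = [null];   // one saved snapshot per whiteboard "page", null = blank
let current = 0;

function saveCurrent() {
  // getImageData keeps a full-resolution copy; switch to canvas.toDataURL()
  // if memory becomes a concern with many pages.
  states[current] = ctx.getImageData(0, 0, canvas.width, canvas.height);
}

function show(index) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  if (states[index]) ctx.putImageData(states[index], 0, 0);
  current = index;
}

document.getElementById('add-btn').addEventListener('click', () => {
  saveCurrent();
  states.push(null);            // new blank page
  show(states.length - 1);
});

document.getElementById('prev-btn').addEventListener('click', () => {
  if (current > 0) { saveCurrent(); show(current - 1); }
});

document.getElementById('next-btn').addEventListener('click', () => {
  if (current < states.length - 1) { saveCurrent(); show(current + 1); }
});

The add button could simply be hidden unless current === states.length - 1, and a cap on states.length gives the whiteboard limit mentioned in the question.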

Is it possible to record a video of the contents of an HTML element?

So I want to use JavaScript and CSS to create an animated video of maps. I imagined doing some animations using Leaflet etc. and using some JavaScript to record a video of them.
However, so far I have found that video recording is only possible for the entire screen or a canvas, according to MDN (https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API/Recording_a_media_element) and Google's documentation (https://developers.google.com/web/updates/2016/10/capture-stream).
I tried to render Leaflet onto a canvas, but I could not find any way to do so.
So is there a way to create an animated video using JavaScript, either by rendering Leaflet onto a canvas or by recording the content of an element?
You can use RecordRTC to record a canvas element.
If you already have a canvas that has animations in it, it is very simple to use RecordRTC to record that canvas element.
If you don't have a canvas element and you need to record an HTML element, you would first need to place a hidden canvas on the page and draw your HTML content onto it using window.requestAnimationFrame(). Then you can use RecordRTC to record that canvas element.
You can use RecordRTC's CanvasRecorder to record just the canvas.
But if you want to do other things, like recording the user's microphone or the browser tab's audio alongside the canvas, you can use canvas.captureStream() and RecordRTC's MediaStreamRecorder to record.
In both the aforementioned cases, you will get a video file as the result.
A full example demo can be found here -> https://www.webrtc-experiment.com/RecordRTC/Canvas-Recording/Canvas-Animation-Recording-Plus-Microphone.html
The source code for that demo can be found here -> https://github.com/muaz-khan/RecordRTC/blob/master/Canvas-Recording/Canvas-Animation-Recording-Plus-Microphone.html
All credits go to Muaz Khan for maintaining such an awesome project alongside many other equally awesome webRTC projects.
NOTE: Rendering an HTML element to a canvas can be CPU-intensive.
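For the plain-canvas case, here is a minimal sketch using canvas.captureStream() with the browser's built-in MediaRecorder (RecordRTC's MediaStreamRecorder can be fed the same stream); the 'animated-canvas' id, the 30 fps capture rate and the 5-second duration are illustrative assumptions:

const canvas = document.getElementById('animated-canvas');
const stream = canvas.captureStream(30);          // capture at up to 30 fps
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm' });
const chunks = [];

recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const blob = new Blob(chunks, { type: 'video/webm' });
  const a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'canvas-recording.webm';
  a.click();                                      // download the resulting video
};

recorder.start();
setTimeout(() => recorder.stop(), 5000);          // record for 5 seconds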

Send flash video stream to JavaScript

I want to make a script that helps the user center his face in the capture preview: a <div> whose outline stays green while the face is kept inside a defined area and turns red when the face moves out of it.
For capture, I use Webcam.js, a library with a Flash fallback.
For tracking, I'm using Tracking.JS, which can handle object detection from a video element.
The goal is to make a real time face positioning plugin.
Different preview modes of Webcam.js:
The HTML5 preview:
A video element created by WEBCAM.JS whose src attribute is the video stream from the getUserMedia function. (This video is handled nicely by TRACKING.JS.)
The Flash preview (IE fallback):
A .swf object (webcam.swf) that prompts for camera access, displays the preview inside the Flash player itself, and, when the user snaps, sends the picture back to JavaScript.
HERE IS MY PROBLEM:
Is there a way to build/edit the .swf file so that it sends the video stream to JavaScript (instead of only the picture when a snap is fired)? The goal is for TRACKING.JS to handle the Flash stream just as it handles the HTML5 one when the fallback is used.
Webcam.swf > Here is the GitHub directory of the ActionScript files.
Thank you.
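For reference, a minimal sketch of the HTML5 path described above (the Flash path is the open question), assuming tracking.js and its face classifier (data/face-min.js) are loaded; the element ids and the target zone are illustrative assumptions:

const outline = document.getElementById('outline');          // the <div> frame
const target = { x: 120, y: 60, width: 400, height: 360 };   // allowed face area

const tracker = new tracking.ObjectTracker('face');
tracker.setInitialScale(4);
tracker.setStepSize(2);
tracker.setEdgesDensity(0.1);

// Track the <video> element that Webcam.js fills from getUserMedia.
tracking.track('#webcam-video', tracker);

tracker.on('track', (event) => {
  const inside = event.data.some((rect) =>
    rect.x >= target.x &&
    rect.y >= target.y &&
    rect.x + rect.width <= target.x + target.width &&
    rect.y + rect.height <= target.y + target.height
  );
  outline.style.borderColor = inside ? 'green' : 'red';
});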

Using Paper.js to highlight a region of interest in a youtube video

I'm trying to use Paper.js to build a tool that allows a user to select a region of interest within a video in their browser. This example reflects what I'm trying to accomplish: http://paperjs.org/examples/hit-testing/
Picture the user being able to create a blob around a portion of a youtube video to highlight a person. I then plan to use the coordinates from the points of the blob for some computer vision processing based on the interest of the user.
I think Paper.js is a great tool for this purpose; however, I'm having a hard time embedding a YouTube video inside a canvas element so that I can actually use Paper.js to build the tool. It's been surprisingly hard to find information on how to accomplish this. This Stack Overflow question provides an answer, but the YouTube video must be downloaded rather than simply linked through its URL: Youtube video Inside canvas
Am I approaching this task correctly? Can anyone think of a way to accomplish this? Thanks!
I could not figure out how to embed the YouTube video inside the canvas, but I managed to do exactly what I wanted by placing the YouTube video behind the canvas element! It turns out that a canvas is transparent, so all it took was some CSS. However, this disables the video's controls; you will need to write some JavaScript to make those work. The following link explains how to place a video behind a canvas: https://developer.apple.com/library/safari/documentation/AudioVideo/Conceptual/HTML-canvas-guide/PuttingVideoonCanvas/PuttingVideoonCanvas.html
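A minimal sketch of that layering, assuming a wrapper element around the video and an overlay canvas; the element ids are illustrative assumptions:

// Position a transparent canvas over the video so Paper.js can draw on top.
const wrapper = document.getElementById('player-wrapper');   // contains the video
const canvas = document.getElementById('roi-canvas');

wrapper.style.position = 'relative';
canvas.style.position = 'absolute';
canvas.style.top = '0';
canvas.style.left = '0';
canvas.width = wrapper.clientWidth;
canvas.height = wrapper.clientHeight;
// The overlay swallows mouse events, which is why the video's own controls
// stop working and have to be re-implemented in JavaScript.

paper.setup(canvas);                 // attach Paper.js to the overlay canvas
const tool = new paper.Tool();
let path;

tool.onMouseDown = () => {
  path = new paper.Path({ strokeColor: 'red', strokeWidth: 2 });
};
tool.onMouseDrag = (event) => path.add(event.point);
tool.onMouseUp = () => {
  path.closed = true;                // path.segments now holds the blob's points
};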

DICOM viewer - highlight a specific area

I am using papaya to view a DICOM image. I want to highlight a specific region of the image when the user drags over it.
I was trying to use the Jcrop plugin, but it does not seem to work. Is there a way to fix that? Or does the papaya viewer have a built-in function to capture the drag event?
I'm guessing JCrop is designed to work with <img>, not <canvas>, but maybe this will be helpful: Cropping image drawn into canvas with JCrop
In Papaya, take a look at these in viewer.js:
papaya.viewer.Viewer.prototype.mouseDownEvent
papaya.viewer.Viewer.prototype.mouseMoveEvent (see isDragging)
papaya.viewer.Viewer.prototype.mouseUpEvent
// these might be useful to set a command-key to enable your feature
papaya.viewer.Viewer.prototype.keyDownEvent
papaya.viewer.Viewer.prototype.keyUpEvent
Note that there are already several features that listen for mouse moves and drags, so you'd have to work around those (maybe add a new command key for your action) or disable the existing behavior.
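As a generic sketch (not papaya-specific API), this is the kind of drag-to-highlight logic those handlers would drive, drawn on a separate overlay canvas positioned above the viewer; the element id is an illustrative assumption:

const overlay = document.getElementById('highlight-overlay'); // canvas over the viewer
const ctx = overlay.getContext('2d');
let start = null;

overlay.addEventListener('mousedown', (e) => {
  start = { x: e.offsetX, y: e.offsetY };
});

overlay.addEventListener('mousemove', (e) => {
  if (!start) return;
  ctx.clearRect(0, 0, overlay.width, overlay.height);
  ctx.fillStyle = 'rgba(255, 255, 0, 0.3)';                   // translucent highlight
  ctx.fillRect(start.x, start.y, e.offsetX - start.x, e.offsetY - start.y);
});

overlay.addEventListener('mouseup', () => { start = null; });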
