I need to draw the stream from the camera onto a canvas using drawImage, but when I add the video element to my React component, the element's stream is also shown on the page. If I use display: 'none', drawImage can't draw the user's webcam; it draws a blank screen. Do I have to show the HTML5 video element on the page, or is there a way to play it without showing it?
I am getting video stream from camera with getUserMedia:
async function getCameraStream() {
  try {
    const constraint = { video: true }
    const stream = await navigator.mediaDevices.getUserMedia(constraint)
    if (videoRef.current) {
      videoRef.current.srcObject = stream
      return
    }
  } catch (error) {
    console.error('Error opening video camera.', error)
  }
  setLoading(false)
  setCameraError(true)
}
And the video ref is:
return (
  <video
    style={{ display: 'none' }}
    ref={videoRef}
    className={classes.sourcePlayback}
    src={sourceUrl}
    hidden={isLoading}
    autoPlay
    playsInline
    controls={false}
    muted
    loop
    onLoadedData={handleVideoLoad}
  />
)
I am using this videoRef in canvas like:
ctx.drawImage(sourcePlayback.htmlElement, 0, 0)
Here, if the htmlElement has display: 'none', it doesn't draw. I want to draw the htmlElement without showing it on the page. Is there a way to achieve this? Essentially, I want the HTML5 video element to play invisibly; display: none does not work.
EDIT:
Adding the CSS visibility: 'hidden' works for this:
sourcePlayback: {
  visibility: 'hidden',
  display: 'flex',
  width: 0,
  height: 0,
},
I was able to replicate your issue in Chrome with https://record.a.video.
When I used display: none on the video element, the video overlay on the canvas was black instead of my camera.
Workaround:
When I set the CSS on the video to width: 1px, the video shows up on the canvas. I suppose it's technically still on the screen, but it'll probably work for your purposes.
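A minimal sketch of the same idea, assuming a video element fed by getUserMedia and a target canvas (the element IDs here are hypothetical): keep the video technically rendered but invisible, then copy its frames onto the canvas in a requestAnimationFrame loop.

```javascript
// Styles that keep the <video> paintable by drawImage while invisible to
// the user: unlike display: none, the element stays in the render tree,
// but it takes up no visible space. A pure helper so the values are easy
// to inspect.
function hiddenButPaintableStyles() {
  return {
    visibility: 'hidden',
    position: 'absolute',
    width: '0',
    height: '0',
  }
}

// Browser-only wiring (assumes elements with these hypothetical IDs exist).
function startMirroring() {
  const video = document.getElementById('source-video')
  const canvas = document.getElementById('target-canvas')
  const ctx = canvas.getContext('2d')

  Object.assign(video.style, hiddenButPaintableStyles())

  function draw() {
    // Only draw once the video has decoded at least one frame.
    if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA) {
      ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    }
    requestAnimationFrame(draw)
  }
  requestAnimationFrame(draw)
}
```

visibility: hidden (or moving the element offscreen with position: absolute) keeps the browser decoding frames, which is why drawImage still works here while display: none leaves the canvas blank.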
Hi all, I am using SignalWire's video calling functionality to make a video calling app. I am facing one issue: since video calling is mostly done on phones or other small screens, the height of the video is very small there.
Is there any way to increase the height of the div into which the video stream is injected?
Here is what it looks like:
On mobile my video screen is quite small; I want to increase the height.
I tried something like:
$scope.roomObject = new SignalWire.Video.RoomSession({
  token: token,
  rootElement: document.getElementById('root'), // an html element to display the video
  audio: true,
  video: {
    width: { min: 720 },
    height: { min: 1280 }
  }
});
This does change the inner video into portrait mode, but the issue remains: I can't increase the height.
Note: increasing the height of the div does not work, though I can increase the width.
Thanks
It sounds like you're trying to extend the video canvas vertically to fill the entire screen. While you can change the aspect ratio of your video stream itself (which is how you're swapping your stream to portrait), you can't change the aspect ratio of the whole canvas.
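If cropping the edges of the stream is acceptable, one CSS-only workaround (a sketch; the #root selector assumes the rootElement from the snippet above, and SignalWire's internal markup may differ) is to force the container to the desired height and let the inner video cover it:

```css
/* Fill the viewport height on small screens; the video is scaled and
   center-cropped to cover the container instead of letterboxing. */
#root {
  height: 100vh;
}
#root video {
  width: 100%;
  height: 100%;
  object-fit: cover;
}
```

This doesn't change the canvas aspect ratio the SDK produces; it only changes how the element is displayed, trading cropped edges for a taller on-screen video.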
NOTE: This is not a duplicate, as I didn't find any existing question about taking a screenshot of a video and canvas combined, and I already tried html2canvas.
We have a div that contains a video element and a canvas. The video is for streaming, and the canvas is for drawing anything on top of that video. Now, if I take a screenshot of the div, it has to contain both the video frame and the drawing. I didn't find any way to get this. I tried html2canvas, but it gives a screenshot of only the canvas. Below is the code.
HTML:
<div id="remoteScreen">
  <video id="remoteVideo" autoplay></video>
  <canvas id="remoteCanvas"></canvas>
</div>
<button id="captureButton">Capture</button>
CSS:
video {
  max-width: 100%;
  width: 320px;
}
canvas {
  position: absolute;
  background: transparent;
}
JS:
const captureBtn = document.querySelector("#captureButton");
captureBtn.addEventListener('click', captureCanvasVideoShot);

function captureCanvasVideoShot() {
  html2canvas(document.querySelector("#remoteScreen")).then(function (canvas) {
    document.body.appendChild(canvas);
  });
}
By using html2canvas, I thought I would get a combined screenshot of the video and canvas, since the canvas is on top of the video and both are part of the remoteScreen div. But I got a screenshot of the canvas only. Can anyone please let me know if there is any way to get a screenshot of the video + canvas combined, or whether I can pass any additional configuration parameters to html2canvas to achieve this?
html2canvas cannot reproduce a video screenshot. It generates an image by reading a page's HTML elements and rendering a picture of them according to the CSS styling information. It doesn't actually take a screenshot.
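A common alternative (a sketch, not html2canvas functionality; it assumes the #remoteVideo and #remoteCanvas elements from the question) is to skip html2canvas entirely and composite the current video frame and the overlay canvas onto a third, offscreen canvas yourself:

```javascript
// Pure helper: the capture canvas must be large enough to hold both layers.
function captureSize(videoW, videoH, overlayW, overlayH) {
  return {
    width: Math.max(videoW, overlayW),
    height: Math.max(videoH, overlayH),
  }
}

// Browser-only: draw the video frame first, then the transparent overlay
// canvas on top of it, and export the result as a PNG data URL.
function captureCombinedShot() {
  const video = document.querySelector('#remoteVideo')
  const overlay = document.querySelector('#remoteCanvas')

  const { width, height } = captureSize(
    video.videoWidth, video.videoHeight,
    overlay.width, overlay.height
  )

  const capture = document.createElement('canvas')
  capture.width = width
  capture.height = height

  const ctx = capture.getContext('2d')
  ctx.drawImage(video, 0, 0)   // bottom layer: current video frame
  ctx.drawImage(overlay, 0, 0) // top layer: the drawing
  return capture.toDataURL('image/png')
}
```

Note that this works because drawImage accepts both HTMLVideoElement and HTMLCanvasElement as sources; a cross-origin video stream would taint the canvas and make toDataURL throw.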
I use Cordova to build a hybrid signage application. I need to display a video on part of the screen, but when I try to display it, I don't see any video, only sound. When I display the video fullscreen, there is no problem. But I need to display it on part of the screen, for example width: 400px, height: 400px, top: 100px, left: 100px.
<video
id="videoContent_0" src="file:///storage/emulated/0/DS/MediaFiles/1004.mp4" autoplay="" style="z-index: 100; display: inline-block;">
</video>
A better solution is to use a Cordova plugin; then you can achieve the desired result.
Installation
cordova plugin add com.moust.cordova.videoplayer
Usage: just call the play method with a video file path as an argument. The video player closes itself when the video completes.
VideoPlayer.play(
  "file:///android_asset/www/movie.mp4",
  {
    volume: 0.5,
    scalingMode: VideoPlayer.SCALING_MODE.SCALE_TO_FIT_WITH_CROPPING
  },
  function () {
    console.log("video completed");
  },
  function (err) {
    console.log(err);
  }
);
I have a button that allows a user to preview the video coming through their camera. The video stream is successfully displayed, but I am struggling to find out how to alter the dimensions of the displayed video. This is what I have:
HTML:
<div id="local-media"></div>
JavaScript:
previewMedia = new Twilio.Conversations.LocalMedia();
Twilio.Conversations.getUserMedia().then(
  function (mediaStream) {
    previewMedia = new Twilio.Conversations.LocalMedia();
    previewMedia.on('trackAdded', function (track) {
      if (track.kind === "video") {
        track.dimensions.height = 1200;
        track.on('started', function (track) { // DOES NOT FIRE
          console.log("Track started");
        });
        track.on('dimensionsChanged', function (videoTrack) { // DOES NOT FIRE
          console.log("Track dimensions changed");
        });
      }
    });
    previewMedia.addStream(mediaStream);
    previewMedia.attach('#local-media');
  },
  function (error) {
    console.error('Unable to access local media', error);
  }
);
The trackAdded event fires, but the started and dimensionsChanged events never fire, and setting track.dimensions.height does not work.
I can shrink the video by using:
div#local-media {
  width: 270px;
  height: 202px;
}
div#local-media video {
  max-width: 100%;
  max-height: 100%;
}
but I cannot increase it beyond 640x375 pixels.
Based upon some interactions with our support team it seems you should first try setting the size of a <div> using CSS before attaching the video track. This technique is used in the quickstart application.
https://www.twilio.com/docs/api/video/guide/quickstart-js
Then, try passing in the optional localStreamConstraints when calling inviteToConversation
https://media.twiliocdn.com/sdk/js/conversations/releases/0.13.5/docs/Client.html#inviteToConversation
It looks like you can specify the dimensions for video:
https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
which is then used by getUserMedia (the WebRTC function)
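For example, a sketch of size constraints passed to getUserMedia (the values and the 'preview' element ID are illustrative, not from the Twilio SDK): the browser treats ideal as a preference and min/max as hard limits.

```javascript
// Constraints asking the camera for 720p capture, capped at 1080p.
// `ideal` is a preference; `min`/`max` are requirements the browser must
// satisfy, otherwise getUserMedia rejects with an OverconstrainedError.
const videoConstraints = {
  video: {
    width: { min: 640, ideal: 1280, max: 1920 },
    height: { min: 480, ideal: 720, max: 1080 },
  },
}

// Browser-only: request the stream and attach it to a <video> element.
async function openPreview() {
  const stream = await navigator.mediaDevices.getUserMedia(videoConstraints)
  document.getElementById('preview').srcObject = stream
}
```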
Keep in mind that you can adjust the capture size locally. This is the size of the video track being captured from the camera.
However, depending on network conditions, the WebRTC engine in your browser (and the receiver's browser) may decide that the video resolution being captured is too high to send across the network at the desired frame rate (you can also set frame rate constraints on the capturer if you'd like to trade off temporal vs spatial resolution). This means that the receiving side may receive a video feed that is smaller than what you intended to send. To overcome this, you can use CSS to style the <video> element to ensure that it stays at a certain size, which will result in the video being upscaled/downscaled where required on the receiving side.
We plan to update our documentation with more of these specifics in the future. But you can always find additional support from help@twilio.com.
You can adjust the screen size using the following CSS. You can find this CSS file in Quickstart -> public -> index.css.
/* Remote media video size */
div#remote-media video {
  width: 50%;
  height: 15%;
  background-color: #272726;
  background-repeat: no-repeat;
}
I'm trying to make a prototype for a photobooth-ish setup where the interface is shown on an HTML page. I've managed to embed a video within a canvas element that uses the integrated/external webcam on my computer to show the user's face/body, depending on their distance from the screen.
Problem: I need to eliminate the background so that ONLY the person's face/body is visible and the rest is transparent. I need this so that the div housing it can be overlaid on top of a background, making it appear as if the person standing in front of the device is in a different setting (space, mountains, castles, etc., as illustrated in the UI) than the room they are actually in. How can I add image processing code to achieve this effect?
The code I'm working with so far:
<div id="outerdiv">
  <video id="video" autoplay></video>
  <canvas id="canvas"></canvas>
</div>
<script>
  // Put event listeners into place
  window.addEventListener("DOMContentLoaded", function() {
    // Grab elements, create settings, etc.
    var canvas = document.getElementById("canvas"),
        context = canvas.getContext("2d"),
        video = document.getElementById("video"),
        videoObj = { "video": true },
        errBack = function(error) {
          console.log("Video capture error: ", error.code);
        };
    // Put video listeners into place
    if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) { // Standard
      navigator.mediaDevices.getUserMedia(videoObj).then(function(stream) {
        video.srcObject = stream; // assign the stream directly; video.src = stream does not work
        video.play();
      }, errBack);
    }
    else if (navigator.webkitGetUserMedia) { // WebKit-prefixed fallback
      navigator.webkitGetUserMedia(videoObj, function(stream) {
        video.src = window.webkitURL.createObjectURL(stream);
        video.play();
      }, errBack);
    }
  }, false);
</script>
The effect would look something like this (the image is from the internet; the idea is to detect the person, eliminate the background, and replace the black area with a transparent region, all in a live video feed captured from the webcam):
I don't know if you've ever used the Photo Booth app on macOS, but it does a similar operation. However, it asks the user to first get out of the scene; this way the program gets a true background image. Think of it as a calibration step. Afterwards it can do a true background subtraction. This could really simplify your problem, as opposed to doing frame-by-frame background subtraction, where you look for differences between subsequent frames, which is much more difficult.
So if you can do "offline calibration", try that.
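A rough sketch of that calibration approach (assuming a video element and a canvas; the threshold value is made up for illustration): capture one empty-scene frame as the background, then make every live pixel that closely matches it transparent.

```javascript
// Pure helper: given two RGBA pixel buffers of equal length, zero out the
// alpha of every pixel in `frame` that is within `threshold` of the
// corresponding background pixel (per-channel sum of absolute differences).
function subtractBackground(frame, background, threshold) {
  const out = new Uint8ClampedArray(frame)
  for (let i = 0; i < frame.length; i += 4) {
    const diff =
      Math.abs(frame[i] - background[i]) +
      Math.abs(frame[i + 1] - background[i + 1]) +
      Math.abs(frame[i + 2] - background[i + 2])
    if (diff < threshold) {
      out[i + 3] = 0 // close enough to the background: make it transparent
    }
  }
  return out
}

// Browser-only wiring: grab a calibration frame while the scene is empty,
// then filter each live frame against it.
function startSubtraction(video, canvas, threshold) {
  const ctx = canvas.getContext('2d')
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
  const background = ctx.getImageData(0, 0, canvas.width, canvas.height).data

  function tick() {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height)
    const frame = ctx.getImageData(0, 0, canvas.width, canvas.height)
    frame.data.set(subtractBackground(frame.data, background, threshold))
    ctx.putImageData(frame, 0, 0)
    requestAnimationFrame(tick)
  }
  requestAnimationFrame(tick)
}
```

A per-pixel threshold like this is fragile under lighting changes and camera noise; in practice you would smooth the mask or use a proper segmentation library, but it illustrates why the one-time calibration frame makes the problem tractable.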