WebRTC speed issues when I capture screen - JavaScript

I am developing a 4-peer WebRTC video chat! Everything works fine at this point, so I added a screen-sharing feature to the website.
Whenever I press screenshare, the connection becomes very slow. I thought it was because of the 4 peer connections, but this happens only when I share my screen.
I tried to use the removeStream function to stop sending the camera stream, but the stream is still lagging.
This is the function that runs after I press the screenshare button:
async function startCapture() {
  // grab mic audio separately, then attach it to the screen-capture stream
  var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  var audioTrack = audioStream.getAudioTracks()[0];
  let captureStream = null;
  try {
    captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
    captureStream.addTrack(audioTrack);
  } catch (err) {
    console.error("Error: " + err);
  }
  // return captureStream;
  // swap the camera stream for the capture stream on every peer connection
  if (rtcPeerConn) {
    rtcPeerConn.removeStream(myStream);
    rtcPeerConn.addStream(captureStream);
  }
  if (rtcPeerConn1) {
    rtcPeerConn1.removeStream(myStream);
    rtcPeerConn1.addStream(captureStream);
  }
  if (rtcPeerConn2) {
    rtcPeerConn2.removeStream(myStream);
    rtcPeerConn2.addStream(captureStream);
  }
  if (rtcPeerConn3) {
    rtcPeerConn3.removeStream(myStream);
    rtcPeerConn3.addStream(captureStream);
  }
  myStream.getTracks().forEach(function(track) {
    track.stop();
  });
  myStream = captureStream;
  success(myStream);
}
I even tried stopping the tracks of the first stream beforehand, like this:
async function startCapture() {
  // stop the old camera tracks first this time
  myStream.getTracks().forEach(function(track) {
    track.stop();
  });
  var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
  var audioTrack = audioStream.getAudioTracks()[0];
  let captureStream = null;
  try {
    captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
    captureStream.addTrack(audioTrack);
  } catch (err) {
    console.error("Error: " + err);
  }
  if (rtcPeerConn) {
    rtcPeerConn.removeStream(myStream);
    rtcPeerConn.addStream(captureStream);
  }
  if (rtcPeerConn1) {
    rtcPeerConn1.removeStream(myStream);
    rtcPeerConn1.addStream(captureStream);
  }
  if (rtcPeerConn2) {
    rtcPeerConn2.removeStream(myStream);
    rtcPeerConn2.addStream(captureStream);
  }
  if (rtcPeerConn3) {
    rtcPeerConn3.removeStream(myStream);
    rtcPeerConn3.addStream(captureStream);
  }
  myStream = captureStream;
  success(myStream);
}
As you can see, I used the removeStream function to avoid sending useless streams, but still nothing changed.
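An aside, not from the original thread: newer WebRTC code swaps tracks with RTCRtpSender.replaceTrack instead of the deprecated removeStream/addStream pair, which avoids renegotiating every connection. A minimal sketch for one peer connection:

// Sketch: replace the outgoing camera video track with the screen track
// on one RTCPeerConnection, without renegotiation.
const screenTrack = captureStream.getVideoTracks()[0];
const sender = rtcPeerConn.getSenders().find(s => s.track && s.track.kind === 'video');
if (sender) {
  await sender.replaceTrack(screenTrack);
}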

What are the constraints you are placing on getDisplayMedia? Perhaps you are sending "too much" video content and thus slowing everything down.
[edit]
According to your comment, you are recording audio from the screen and also audio from the mic. Perhaps remove the audio track from the screen recording?
You can also use options to reduce the size of the video (this requires using getUserMedia instead of getDisplayMedia):
video: {
  width: { min: 100, ideal: width, max: 1920 },
  height: { min: 100, ideal: height, max: 1080 },
  frameRate: { ideal: framerate }
}
Perhaps a lower framerate? Try reducing the size and see if that helps too :)
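Building on that suggestion, here is a minimal sketch (mine, not from the original answer) of a size- and framerate-capped options object for the capture call; the values are placeholders to tune, and it assumes a browser that honours width/height/frameRate constraints in getDisplayMedia:

const gdmOptions = {
  video: {
    width: { max: 1280 },            // cap the capture resolution
    height: { max: 720 },
    frameRate: { ideal: 5, max: 10 } // screen content rarely needs a high framerate
  },
  audio: false // mic audio is captured separately via getUserMedia
};
const captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);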

Related

Record video in lower resolution than original using MediaRecorder

I'm creating a video recorder script using JavaScript and the MediaRecorder API. I'm using a video capture device as the source. The video output is 1920 x 1080, but I'm trying to shrink this resolution to 640 x 360 (360p).
I will write all the code below. I tried many configurations and variants of HTML and JS, and according to this site my video source should fit the size I'm trying to force.
The video source is this Elgato Cam Link 4K.
UPDATE
Instead of using exact in the video constraints, replace it with ideal and the browser will check whether that resolution is available on the device.
The Elgato Cam Link apparently doesn't support 360p; I tested with an external webcam which does support 360p, and using ideal it works.
In the Windows camera settings you can see there are no other resolutions available on the Cam Link, only HD and FHD.
The HTML tag:
<video id="videoEl" width="640" height="360" autoplay canplay resize></video>
This is the getUserMedia() script:
const video = document.getElementById('videoEl');
const constraints = {
  audio: { deviceId: audioDeviceId },
  video: {
    deviceId: videoDeviceId,
    width: { exact: 640 },
    height: { exact: 360 },
    frameRate: 30
  }
};
this.CameraStream = await navigator.mediaDevices.getUserMedia(constraints);
video.srcObject = this.CameraStream;
Before that, I choose the video source using navigator.mediaDevices.enumerateDevices().
Then I tried some options for the MediaRecorder constructor:
this.MediaRecorder = new MediaRecorder(this.CameraStream)
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/webm' })
I found this mimeType in this forum:
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/x-matroska;codecs=h264' })
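An aside (not from the original post): MediaRecorder.isTypeSupported can test each of those mimeTypes before constructing the recorder, for example:

// pick the first container/codec combination this browser can record
const candidates = [
  'video/x-matroska;codecs=h264',
  'video/webm',
  'video/mp4'
];
const supported = candidates.find(t => MediaRecorder.isTypeSupported(t));
this.MediaRecorder = new MediaRecorder(this.CameraStream, supported ? { mimeType: supported } : {});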
And the event listener
this.MediaRecorder.addEventListener('dataavailable', event => {
  this.BlobsRecorded.push(event.data);
});
MediaRecorder on stop
As I mentioned before, I tried some variants of options:
const options = { type: 'video/x-matroska;codecs=h264' };
const options = { type: 'video/webm' };
const options = { type: 'video/mp4' }; // not supported
const finalVideo = URL.createObjectURL(
  new Blob(this.BlobsRecorded, options)
);
Note
Everything is working perfectly now; I'm just leaving the code so you can see the constraints used, for illustrative purposes. If there is something missing, let me know and I'll add it here.
Thank you for your time.
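For reference, a minimal sketch of the ideal-based constraints described in the UPDATE above (same device IDs as in the question); with ideal, the browser picks the closest supported resolution instead of throwing when 360p is unavailable:

const constraints = {
  audio: { deviceId: audioDeviceId },
  video: {
    deviceId: videoDeviceId,
    width: { ideal: 640 },   // falls back gracefully if 360p is unsupported
    height: { ideal: 360 },
    frameRate: 30
  }
};
this.CameraStream = await navigator.mediaDevices.getUserMedia(constraints);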

Can't clearTimer in async interval

I'm trying to modify an implementation of some TensorFlow face detection algorithms using JavaScript.
At the moment, I've added a button that properly stops/starts the video streaming from my camera. Also, while the video is playing, I detect the faces in it every 100 ms with an async interval.
The problem appears when I stop and then restart the video streaming, because multiple detections are generated. I'm assuming it's related to the interval, considering that I print the detections and the interval variable to the console: there is more than one detection in the same interval, and the interval doesn't reset to zero after clearInterval(DetTim) when the video gets paused.
My code is as follows (I'm omitting the loading of the models and the StartVideo function):
const video = document.getElementById('video')
const PlayButton = document.getElementById('play-button')
var DetTim = null

video.addEventListener('play', () => {
  const canvas = faceapi.createCanvasFromMedia(video)
  document.body.append(canvas)
  const displaySize = { width: video.width, height: video.height }
  faceapi.matchDimensions(canvas, displaySize)
  if (!video.paused) {
    DetTim = setInterval(async () => {
      console.log(DetTim)
      const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
      const resizedDetections = faceapi.resizeResults(detections, displaySize)
      canvas.getContext('2d').clearRect(0, 0, canvas.width, canvas.height)
      faceapi.draw.drawDetections(canvas, resizedDetections)
    }, 100)
  } else {
    clearInterval(DetTim)
    console.log(DetTim)
  }
})
PlayButton.addEventListener("click", (e) => {
  if (video.paused) {
    video.play()
    e.target.textContent = '▌ ▌'
  } else {
    video.pause()
    e.target.textContent = '▶'
  }
})
Also, here are some screenshots related to the problem. In the first shot, the code works properly as it has just been initialized. In the second one, multiple detections (blue rectangles) have been drawn over the canvas after several start/stop clicks.
First Shot
Second Shot
As CBroe mentioned in a comment:
I would either handle both in the click event, or both in the play and pause event. Mixing both does not sound like a good idea.
I ended up handling the click event within the play event, and it solved the issue.
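For reference, a minimal sketch of CBroe's other suggestion (my reading, not the asker's final code): keep the interval lifecycle entirely in the play and pause events, so each play creates exactly one interval:

video.addEventListener('play', () => {
  if (DetTim !== null) return // guard against a duplicate interval
  DetTim = setInterval(async () => {
    const detections = await faceapi.detectAllFaces(video, new faceapi.TinyFaceDetectorOptions())
    // ...resize, clear the canvas, and draw, as in the question...
  }, 100)
})
video.addEventListener('pause', () => {
  clearInterval(DetTim)
  DetTim = null // allow a fresh interval on the next play
})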

How to play Multiple Videos in requestPictureInPicture?

requestPictureInPicture is so amazing, but it looks like it only works with 1 video.
How can I get requestPictureInPicture to play multiple videos, so I can watch two videos at the same time?
Basically this only displays one video:
video
  .requestPictureInPicture()
  .catch(error => {
    console.log(error) // Error handling
  });

video2
  .requestPictureInPicture()
  .catch(error => {
    console.log(error) // Error handling
  });
https://codepen.io/zecheesy/pen/YzwBJMR
Thoughts: Maybe we could put two videos in a canvas? And have the pictureInPicture play both videos at the same time? https://googlechrome.github.io/samples/picture-in-picture/audio-playlist
I'm not sure if this is possible. Would love your help so much!
Regarding opening two Picture-in-Picture windows simultaneously, the specs have a paragraph just for that, where they explain they actually leave it as an implementation detail:
Operating systems with a Picture-in-Picture API usually restrict Picture-in-Picture mode to only one window. Whether only one window is allowed in Picture-in-Picture mode will be left to the implementation and the platform. However, because of the one Picture-in-Picture window limitation, the specification assumes that a given Document can only have one Picture-in-Picture window.
What happens when there is a Picture-in-Picture request while a window is already in Picture-in-Picture will be left as an implementation detail: the current Picture-in-Picture window could be closed, the Picture-in-Picture request could be rejected or even two Picture-in-Picture windows could be created. Regardless, the User Agent will have to fire the appropriate events in order to notify the website of the Picture-in-Picture status changes.
So the best we can say is that you should not expect it to open two windows simultaneously.
Now, if you really wish, you can indeed draw both videos on a canvas and pass this canvas to a PiP window, after piping its captureStream() to a third <video>. This requires that both videos are served with the proper Access-Control-Allow-Origin headers, and moreover it requires your browser to actually support the PiP API (current Firefox has a PiP feature which is not the PiP API).
Here is a proof of concept:
const vids = document.querySelectorAll("video");
const btn = document.querySelector("button");
// wait for both videos to have their metadata
Promise.all([...vids].map((vid) => {
  return new Promise((res) => vid.onloadedmetadata = () => res());
}))
.then(() => {
  if (!HTMLVideoElement.prototype.requestPictureInPicture) {
    return console.error("Your browser doesn't support the PiP API");
  }
  btn.onclick = async (evt) => {
    const canvas = document.createElement("canvas");
    // both videos share the same 16/9 ratio,
    // so in this case it's really easy to draw both on the same canvas;
    // making it dynamic would require more maths,
    // but I'll leave that to the readers
    const height = 720;
    const width = 1280;
    canvas.height = height * 2; // vertical disposition
    canvas.width = width;
    const ctx = canvas.getContext("2d");
    const video = document.createElement("video");
    video.srcObject = canvas.captureStream();
    let began = false; // rPiP needs the video's metadata
    anim();
    await video.play();
    began = true;
    video.requestPictureInPicture();
    function anim() {
      ctx.drawImage(vids[0], 0, 0, width, height);
      ctx.drawImage(vids[1], 0, height, width, height);
      // only while we are still in PiP mode
      if (!began || document.pictureInPictureElement === video) {
        requestAnimationFrame(anim);
      }
      else {
        // kill the stream
        video.srcObject.getTracks().forEach(track => track.stop());
      }
    }
  };
});
video { width: 300px }
<button>enter Picture in Picture</button><br>
<video crossorigin muted controls autoplay loop
src="https://upload.wikimedia.org/wikipedia/commons/2/22/Volcano_Lava_Sample.webm"></video>
<video crossorigin muted controls autoplay loop
src="https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm"></video>
And beware: since I muted the videos for SO, scrolling so that the original videos are out of sight will pause them.

How to resolve iOS 11 Safari getUserMedia "Invalid constraint" issue

I'm attempting to run the following code in Safari on iOS 11. It should prompt the user to give access to their device's camera and then display the stream in my <video autoplay id="video"></video> element. However, when run on iOS 11, it results in an OverconstrainedError being thrown:
{message: "Invalid constraint", constraint: ""}
The code runs fine on Android and successfully opens the camera.
I've attempted multiple valid configurations with no luck.
I know iOS 11 just came out, so it may be a bug, but any thoughts? Has anyone else run into this?
Code:
var video = document.getElementById('video');
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: true })
    .then(function(stream) {
      video.src = window.URL.createObjectURL(stream);
      video.play();
    })
    .catch(function(err) {
      console.log(err);
    });
}
Edit 1
I've run navigator.mediaDevices.getSupportedConstraints() and it returns the following:
{
  aspectRatio: true,
  deviceId: true,
  echoCancellation: false,
  facingMode: true,
  frameRate: true,
  groupId: true,
  height: true,
  sampleRate: false,
  sampleSize: false,
  volume: true,
  width: true
}
I've tried configurations omitting the video property, but had no luck.
The invalid constraint error in Safari occurs because the browser expects you to pass a correct width, one of:
320
640
1280
The height is auto-calculated at an aspect ratio of 4:3 for 320 or 640, and 16:9 for 1280. If you pass a width of 320, your video stream is set to:
320x240
If you set a width of 640, your video stream is set to:
640x480
And if you set a width of 1280, your video stream is set to:
1280x720
In any other case you get an "InvalidConstraint" error for the width value.
You can also use min, max, exact, or ideal constraints for width; please check the MDN documentation.
Here is an example in this codepen:
var config = { video: { width: 320 /* 320-640-1280 */ } };
var start = () => navigator.mediaDevices.getUserMedia(config)
  .then(stream => v.srcObject = stream)
  .then(() => new Promise(resolve => v.onloadedmetadata = resolve))
  .then(() => log("Success: " + v.videoWidth + "x" + v.videoHeight))
  .catch(log);

var log = msg => div.innerHTML += "<p>" + msg + "</p>";
PS: In Chrome you can set a width or height and the video stream is set to those sizes; Firefox does a fitness-distance match; and Safari expects an exact match.
Remember that the iOS Simulator that comes with Xcode does not support the webcam or microphone, which is why you may get the OverconstrainedError (per the getUserMedia docs at https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia, that error means no device fits the passed options, even if you're not asking for anything specific).
It appears to have been a bug that was corrected, because I just tried it again and the error message no longer appears.
Note that while the error message went away, I did have to make one more change for it to work, which was adding video.srcObject = stream; in the then callback.
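Putting those two notes together, a minimal sketch of the working version (srcObject is the standard way to attach a MediaStream; createObjectURL no longer accepts streams in current browsers):

var video = document.getElementById('video');
if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
  navigator.mediaDevices.getUserMedia({ video: { width: 640 } })
    .then(function(stream) {
      video.srcObject = stream; // instead of window.URL.createObjectURL(stream)
      video.play();
    })
    .catch(function(err) {
      console.log(err);
    });
}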

facingMode in MediaDevices.getUserMedia seems not to work in latest Android Chrome (v53)

I have a function which lets the user select the camera and shows the captured video on the page [like this].
My code worked before Android Google Chrome version 52, but I don't know why it is broken now.
First, I check which devices I can use:
navigator.mediaDevices.enumerateDevices()
  .then(function(devices) {
    devices.forEach(function(device) {
      console.log(device.kind + ": " + device.label + " id = " + device.deviceId);
    });
  })
  .catch(function(err) {
    console.log(err.name + ": " + err.message);
  });
and it returns two cameras (one front, one back) as I expect:
videoinput: camera 1, facing front id = ef5f41259c805a533261c2d91c274fdbfa8a6c8d629231cb484845032f90e61a
videoinput: camera 0, facing back id = 81448a117b2569ba9af905d01384b32179b9d32fe6a3fbabddf03868f36e4750
Then I follow the sample code, and below is exactly what I use:
<video id="video" autoplay></video>
<script>
var p = navigator.mediaDevices.getUserMedia({ video: { facingMode: "environment", width: 480, height: 800 } });
p.then(function(mediaStream) {
  var video = document.querySelector('video');
  video.src = window.URL.createObjectURL(mediaStream);
  video.onloadedmetadata = function(e) {
    // Do something with the video here.
  };
});
p.catch(function(err) { console.log(err.name); }); // always check for errors at the end.
</script>
Whether I set facingMode: "environment" or facingMode: { exact: "environment" }, it ends up using the front camera. Should I also report this to Google?
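If facingMode keeps being ignored, one common workaround (an assumption on my part, not from this thread) is to skip facingMode and open the back camera directly by the deviceId that enumerateDevices already returned:

// pick the first camera whose label mentions "back" (labels as logged above)
navigator.mediaDevices.enumerateDevices().then(function(devices) {
  var back = devices.find(function(d) {
    return d.kind === 'videoinput' && /back/i.test(d.label);
  });
  return navigator.mediaDevices.getUserMedia({
    video: back ? { deviceId: { exact: back.deviceId } } : true
  });
}).then(function(mediaStream) {
  document.querySelector('video').srcObject = mediaStream;
});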
