How to resolve iOS 11 Safari getUserMedia "Invalid constraint" issue - javascript

I'm attempting to run the following code in Safari on iOS 11. It should prompt the user to grant access to their device's camera and then display the stream in my <video autoplay id="video"></video> element. However, when running on iOS 11, it results in an OverconstrainedError being thrown:
{message: "Invalid constraint", constraint: ""}
The code runs fine on Android and successfully opens the camera.
I've attempted multiple valid configurations with no luck.
I know iOS 11 just came out so it may be a bug, but any thoughts? Has anyone else run into this?
Code:
var video = document.getElementById('video');

if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(function(stream) {
            video.src = window.URL.createObjectURL(stream);
            video.play();
        })
        .catch(function(err) {
            console.log(err);
        });
}
Edit 1
I've run navigator.mediaDevices.getSupportedConstraints() and it returns the following:
{
    aspectRatio: true,
    deviceId: true,
    echoCancellation: false,
    facingMode: true,
    frameRate: true,
    groupId: true,
    height: true,
    sampleRate: false,
    sampleSize: false,
    volume: true,
    width: true
}
I've tried configurations omitting the video property, but had no luck.

The "Invalid constraint" error in Safari occurs because the browser expects you to pass one of the widths it supports:
320
640
1280
The height is calculated automatically from a fixed aspect ratio: 4:3 for 320 or 640, and 16:9 for 1280. So if you pass a width of 320, your video stream is set to:
320x240
If you set a width of 640, your video stream is set to:
640x480
And if you set a width of 1280, your video stream is set to:
1280x720
Any other width value gets you an "Invalid constraint" error.
You can also use min, max, exact, or ideal constraints for width; please check the MDN documentation.
Here is an example, also in this codepen:
var config = { video: { width: 320 /* 320, 640, or 1280 */ } };

var start = () => navigator.mediaDevices.getUserMedia(config)
    .then(stream => v.srcObject = stream)
    .then(() => new Promise(resolve => v.onloadedmetadata = resolve))
    .then(() => log("Success: " + v.videoWidth + "x" + v.videoHeight))
    .catch(log);

var log = msg => div.innerHTML += "<p>" + msg + "</p>";
P.S.: In Chrome you can set a width and height and the video stream is set to those sizes; Firefox applies a fitness-distance calculation; Safari expects an exact match.
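To make one set of constraints work across all three browsers, you can encode the size table above in a small helper and request sizes with ideal rather than bare values. This is only a sketch; safariHeightFor is a made-up helper name, and the size table reflects the iOS 11 behavior described above:

```javascript
// Safari (iOS 11) accepts only these discrete capture widths; the height
// follows from a fixed aspect ratio (4:3 for 320/640, 16:9 for 1280).
function safariHeightFor(width) {
    const sizes = { 320: 240, 640: 480, 1280: 720 };
    if (!(width in sizes)) {
        throw new Error("Safari rejects width " + width);
    }
    return sizes[width];
}

// Using `ideal` lets Chrome/Firefox pick the closest supported size,
// while Safari still receives a width it accepts.
const constraints = {
    video: { width: { ideal: 640 }, height: { ideal: safariHeightFor(640) } }
};

// Browser-only part, guarded so the sketch also loads outside a browser.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
    navigator.mediaDevices.getUserMedia(constraints)
        .then(stream => { document.getElementById("video").srcObject = stream; })
        .catch(err => console.error(err.name, err.message));
}
```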

Remember that the iOS Simulator that ships with Xcode does not support the webcam or microphone, which is why you may get the OverconstrainedError. Per the MDN docs (https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia), that error means no device fits the passed options, even if you aren't specifying anything in particular.

It appears to have been a bug that was since fixed, because I just tried it again and the error message no longer appears.
Note that while the error message went away, I did have to make one more change to get it working: setting video.srcObject = stream; in the then callback.
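The srcObject change is the important part: newer browsers dropped support for URL.createObjectURL(MediaStream). A sketch of a helper that prefers srcObject and falls back for older browsers (attachStream is a made-up name):

```javascript
// Attach a MediaStream to a <video>, preferring the modern srcObject
// property and falling back to the deprecated createObjectURL(stream)
// path for older browsers.
function attachStream(video, stream) {
    if ("srcObject" in video) {
        video.srcObject = stream;
        return "srcObject";
    }
    // Deprecated: modern browsers throw if handed a MediaStream here.
    video.src = window.URL.createObjectURL(stream);
    return "createObjectURL";
}

// Browser-only usage, guarded so the sketch also loads outside a browser.
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(stream => {
            const video = document.getElementById("video");
            attachStream(video, stream);
            return video.play(); // play() returns a Promise in modern browsers
        })
        .catch(err => console.error(err));
}
```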

Related

ZXing library problem: configuring x0.5 zoom on iPhone 13

As part of a React web app, we use the ZXing library to perform barcode and QR code scans. However, we've hit a problem with the iPhone 13, which sets the zoom to x1 by default, resulting in a blurred image when we get close to the elements to be scanned. We would like to set the zoom to x0.5 (as is possible in the native iPhone camera app), but I can't find a solution compatible with iOS. If you have any ideas, I'm all ears.
Thanks in advance.
`
if (!navigator?.mediaDevices?.getUserMedia) {
    onError && onError('Cannot stream camera')
    return
}
let userMediaStream: MediaStream
navigator.mediaDevices.getUserMedia({ audio: false, video: { facingMode: 'environment' } })
    .then(stream => {
        userMediaStream = stream
        if (!videoRef?.current) {
            onError && onError('video ref missing')
            return
        }
        videoRef.current.srcObject = stream
    })
return () => {
    if (userMediaStream) {
        userMediaStream.getTracks().forEach(t => t.stop())
    }
}
`
I've already tried listing the supportedConstraints:
`
const constraintList = new Array();
const supportedConstraints = navigator.mediaDevices.getSupportedConstraints();
for (const constraint of Object.keys(supportedConstraints)) {
constraintList.push(constraint);
}
console.log(constraintList);
`
But I get no element allowing me to modify the zoom or the focus: ['aspectRatio', 'deviceId', 'echoCancellation', 'facingMode', 'frameRate', 'groupId', 'height', 'sampleRate', 'sampleSize', 'volume', 'width']
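For what it's worth, on browsers that do expose zoom (some Chromium-based ones, via track.getCapabilities()), it can be applied with applyConstraints. This is only a sketch with made-up helper names (clampZoom, tryZoom), and, as the constraint list above suggests, iOS Safari does not expose a zoom capability at all, so this won't help there:

```javascript
// Clamp a requested zoom to what the track reports it can do; returns
// null when the track exposes no zoom capability at all (the iOS case).
function clampZoom(capabilities, desired) {
    if (!capabilities || !("zoom" in capabilities)) return null;
    return Math.min(Math.max(desired, capabilities.zoom.min), capabilities.zoom.max);
}

// Try to zoom a live video track; resolves false when unsupported.
async function tryZoom(track, desired) {
    // getCapabilities() is not implemented everywhere; guard it.
    const caps = track.getCapabilities ? track.getCapabilities() : null;
    const zoom = clampZoom(caps, desired);
    if (zoom === null) return false; // no zoom support on this device/browser
    await track.applyConstraints({ advanced: [{ zoom }] });
    return true;
}
```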

Record video in lower resolution than original using MediaRecorder

I'm creating a video recorder script using JavaScript and the MediaRecorder API, with a video capture device as the source. The video output is 1920x1080, but I'm trying to shrink the resolution to 640x360 (360p).
I'll include all the code below. I tried many configurations and variants of HTML and JS, and according to this site my video source should be able to fit the size I'm trying to force.
The video source is this Elgato Cam Link 4K.
UPDATE
Instead of using exact in the video constraints, replace it with ideal, and the browser will check whether the resolution is available on the device.
The Elgato Cam Link apparently doesn't support 360p. I tested with an external webcam that does support 360p, and using ideal it works.
Using the Windows camera settings you can see there are no other resolutions available on the Cam Link, only HD and FHD.
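The difference between exact and ideal described in this update can be modeled with a small hypothetical helper (resolveWidth is a made-up name; the browser's real fitness-distance algorithm is more involved):

```javascript
// Simulate the difference: `exact` fails unless the device lists the size,
// while `ideal` falls back to the closest size the device offers.
function resolveWidth(supportedWidths, constraint) {
    if ("exact" in constraint) {
        if (!supportedWidths.includes(constraint.exact)) {
            // What the Cam Link did for { exact: 640 }:
            throw new Error("OverconstrainedError: width");
        }
        return constraint.exact;
    }
    // `ideal`: pick the supported width closest to the requested one.
    return supportedWidths.reduce((best, w) =>
        Math.abs(w - constraint.ideal) < Math.abs(best - constraint.ideal) ? w : best);
}
```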
The HTML tag:
<video id="videoEl" width="640" height="360" autoplay canplay resize></video>
This is the getUserMedia() script:
const video = document.getElementById('videoEl');
const constraints = {
    audio: { deviceId: audioDeviceId },
    video: {
        deviceId: videoDeviceId,
        width: { exact: 640 },
        height: { exact: 360 },
        frameRate: 30
    }
};
this.CameraStream = await navigator.mediaDevices.getUserMedia(constraints);
video.srcObject = this.CameraStream;
Before that, I choose the video source using navigator.mediaDevices.enumerateDevices();
Then I tried some options for the MediaRecorder constructor:
this.MediaRecorder = new MediaRecorder(this.CameraStream)
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/webm' })
I found this mimeType in this forum:
this.MediaRecorder = new MediaRecorder(this.CameraStream, { mimeType: 'video/x-matroska;codecs=h264' })
And the event listener
this.MediaRecorder.addEventListener('dataavailable', event => {
    this.BlobsRecorded.push(event.data);
});
MediaRecorder on stop
As I mentioned before, I tried some variants of the options:
const options = { type: 'video/x-matroska;codecs=h264' };
const options = { type: 'video/webm' };
const options = { type: 'video/mp4' }; // not supported
const finalVideo = URL.createObjectURL(
    new Blob(this.BlobsRecorded, options)
);
Note
Everything is working perfectly; I'm just leaving the code so you can see the constraints used, for illustrative purposes. If something is missing, let me know and I'll add it here.
Thank you for your time.
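If a device genuinely can't deliver 360p (as with the Cam Link here), one possible workaround, sketched below under the assumption that re-encoding through a canvas is acceptable, is to draw frames onto a smaller canvas and record canvas.captureStream() instead of the camera stream (targetSize and recordDownscaled are hypothetical names):

```javascript
// Pick a target no larger than maxW wide, preserving the source ratio,
// so 1920x1080 becomes 640x360 without distortion.
function targetSize(srcW, srcH, maxW = 640) {
    const scale = Math.min(1, maxW / srcW);
    return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}

// Browser-only: record a downscaled copy of an already-playing <video>.
function recordDownscaled(cameraStream, videoEl, width = 640, height = 360) {
    const canvas = document.createElement("canvas");
    canvas.width = width;
    canvas.height = height;
    const ctx = canvas.getContext("2d");

    (function draw() {
        ctx.drawImage(videoEl, 0, 0, width, height); // scale down each frame
        requestAnimationFrame(draw);
    })();

    const stream = canvas.captureStream(30); // 30 fps
    // Carry the original audio track over, if any.
    cameraStream.getAudioTracks().forEach(t => stream.addTrack(t));

    const chunks = [];
    const recorder = new MediaRecorder(stream, { mimeType: "video/webm" });
    recorder.addEventListener("dataavailable", e => chunks.push(e.data));
    recorder.addEventListener("stop", () => {
        const url = URL.createObjectURL(new Blob(chunks, { type: "video/webm" }));
        console.log("recording ready:", url);
    });
    recorder.start();
    return recorder;
}
```

This trades CPU for compatibility: the MediaRecorder only ever sees 640x360 frames, regardless of what the capture device offers.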

How to play Multiple Videos in requestPictureInPicture?

requestPictureInPicture is so amazing, but it looks like it only works with one video.
How can I get requestPictureInPicture to play multiple videos, so I can watch two videos at the same time?
Basically this only displays one video:
video
    .requestPictureInPicture()
    .catch(error => {
        console.log(error); // Error handling
    });

video2
    .requestPictureInPicture()
    .catch(error => {
        console.log(error); // Error handling
    });
https://codepen.io/zecheesy/pen/YzwBJMR
Thoughts: Maybe we could put two videos in a canvas? And have the pictureInPicture play both videos at the same time? https://googlechrome.github.io/samples/picture-in-picture/audio-playlist
I'm not sure if this is possible. Would love your help so much!
Regarding opening two Picture-in-Picture windows simultaneously, the spec has a paragraph just for this, where they explain it is actually left as an implementation detail:
Operating systems with a Picture-in-Picture API usually restrict Picture-in-Picture mode to only one window. Whether only one window is allowed in Picture-in-Picture mode will be left to the implementation and the platform. However, because of the one Picture-in-Picture window limitation, the specification assumes that a given Document can only have one Picture-in-Picture window.
What happens when there is a Picture-in-Picture request while a window is already in Picture-in-Picture will be left as an implementation detail: the current Picture-in-Picture window could be closed, the Picture-in-Picture request could be rejected or even two Picture-in-Picture windows could be created. Regardless, the User Agent will have to fire the appropriate events in order to notify the website of the Picture-in-Picture status changes.
So the best we can say is that you should not expect it to open two windows simultaneously.
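Since the spec only promises that the UA "will have to fire the appropriate events", a robust page should feature-detect and listen for those events rather than assume a second request will succeed. A sketch with made-up helper names (watchPiP, requestPiPSafely):

```javascript
// Subscribe to the PiP status events the spec requires the UA to fire,
// so the page learns when its window is closed or replaced.
function watchPiP(video, onEnter, onLeave) {
    video.addEventListener("enterpictureinpicture", onEnter);
    video.addEventListener("leavepictureinpicture", onLeave);
}

// Request PiP defensively: the UA may reject the request outright
// (e.g. NotAllowedError) when another PiP window already exists.
async function requestPiPSafely(video) {
    if (!document.pictureInPictureEnabled) return null;
    try {
        return await video.requestPictureInPicture();
    } catch (err) {
        console.warn(err.name, err.message);
        return null;
    }
}
```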
Now, if you really wish, you can indeed draw both videos on a canvas and pass this canvas to a PiP window by piping its captureStream() to a third <video>, though this requires that both videos are served with the proper Access-Control-Allow-Origin headers, and moreover it requires your browser to actually support the PiP API (current Firefox has a PiP feature which is not the PiP API).
Here is a proof of concept:
const vids = document.querySelectorAll("video");
const btn = document.querySelector("button");
// wait until both videos have their metadata
Promise.all([...vids].map((vid) => {
    return new Promise((res) => vid.onloadedmetadata = () => res());
}))
.then(() => {
    if (!HTMLVideoElement.prototype.requestPictureInPicture) {
        return console.error("Your browser doesn't support the PiP API");
    }
    btn.onclick = async (evt) => {
        const canvas = document.createElement("canvas");
        // both videos share the same 16/9 ratio,
        // so in this case it's really easy to draw both on the same canvas;
        // making it dynamic would require more maths,
        // but I'll leave that to the reader
        const height = 720;
        const width = 1280;
        canvas.height = height * 2; // vertical disposition
        canvas.width = width;
        const ctx = canvas.getContext("2d");
        const video = document.createElement("video");
        video.srcObject = canvas.captureStream();
        let began = false; // rPiP needs the video's metadata
        anim();
        await video.play();
        began = true;
        video.requestPictureInPicture();
        function anim() {
            ctx.drawImage(vids[0], 0, 0, width, height);
            ctx.drawImage(vids[1], 0, height, width, height);
            // keep drawing only while we are still in PiP mode
            if (!began || document.pictureInPictureElement === video) {
                requestAnimationFrame(anim);
            }
            else {
                // kill the stream
                video.srcObject.getTracks().forEach(track => track.stop());
            }
        }
    }
});
video { width: 300px }
<button>enter Picture in Picture</button><br>
<video crossorigin muted controls autoplay loop
src="https://upload.wikimedia.org/wikipedia/commons/2/22/Volcano_Lava_Sample.webm"></video>
<video crossorigin muted controls autoplay loop
src="https://upload.wikimedia.org/wikipedia/commons/a/a4/BBH_gravitational_lensing_of_gw150914.webm"></video>
And beware: since I muted the videos for SO, scrolling so that the original videos are out of sight will pause them.

webrtc speed issues when i capture screen

I'm developing a 4-peer WebRTC video chat.
Everything was fine at this point, so I added a screen-sharing feature to the website.
Whenever I press screen share, the connection becomes very slow. I thought it was because of the 4-peer connection, but this happens only when I share my screen.
I tried using the removeStream function to remove the camera stream, but the streams are still lagging.
This is the function that runs after I press the screen-share button:
async function startCapture() {
    var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    var audioTrack = audioStream.getAudioTracks()[0];
    let captureStream = null;
    try {
        captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
        captureStream.addTrack(audioTrack);
    } catch (err) {
        console.error("Error: " + err);
    }
    // return captureStream;
    if (rtcPeerConn) {
        rtcPeerConn.removeStream(myStream);
        rtcPeerConn.addStream(captureStream);
    }
    if (rtcPeerConn1) {
        rtcPeerConn1.removeStream(myStream);
        rtcPeerConn1.addStream(captureStream);
    }
    if (rtcPeerConn2) {
        rtcPeerConn2.removeStream(myStream);
        rtcPeerConn2.addStream(captureStream);
    }
    if (rtcPeerConn3) {
        rtcPeerConn3.removeStream(myStream);
        rtcPeerConn3.addStream(captureStream);
    }
    myStream.getTracks().forEach(function(track) {
        track.stop();
    });
    myStream = captureStream;
    success(myStream);
}
I even tried removing the tracks from the first stream, like this:
async function startCapture() {
    myStream.getTracks().forEach(function(track) {
        track.stop();
    });
    var audioStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    var audioTrack = audioStream.getAudioTracks()[0];
    let captureStream = null;
    try {
        captureStream = await navigator.mediaDevices.getDisplayMedia(gdmOptions);
        captureStream.addTrack(audioTrack);
    } catch (err) {
        console.error("Error: " + err);
    }
    if (rtcPeerConn) {
        rtcPeerConn.removeStream(myStream);
        rtcPeerConn.addStream(captureStream);
    }
    if (rtcPeerConn1) {
        rtcPeerConn1.removeStream(myStream);
        rtcPeerConn1.addStream(captureStream);
    }
    if (rtcPeerConn2) {
        rtcPeerConn2.removeStream(myStream);
        rtcPeerConn2.addStream(captureStream);
    }
    if (rtcPeerConn3) {
        rtcPeerConn3.removeStream(myStream);
        rtcPeerConn3.addStream(captureStream);
    }
    myStream = captureStream;
    success(myStream);
}
As you can see, I used the removeStream function to avoid sending useless streams, but still nothing changed.
What constraints are you placing on getDisplayMedia? Perhaps you are sending "too much" video content and thus slowing everything down.
[edit]
According to your comment, you are recording audio from the screen as well as audio from the mic. Perhaps remove the audio track from the screen recording?
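One way to do that, sketched here with hypothetical helper names (pickTracks, startScreenShare), is to request the screen without audio and build a new MediaStream from the screen's video track plus the mic's audio track:

```javascript
// Pick which tracks to send: the screen's video plus the mic's audio.
function pickTracks(screenStream, micStream) {
    return [...screenStream.getVideoTracks(), ...micStream.getAudioTracks()];
}

// Browser-only usage: request the screen with audio disabled so the
// screen's own audio is never captured in the first place.
async function startScreenShare() {
    const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
    const screen = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: false });
    return new MediaStream(pickTracks(screen, mic));
}
```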
You can also use options to reduce the size of the video: (this requires using getUserMedia instead of getDisplayMedia)
video: {
    width: { min: 100, ideal: width, max: 1920 },
    height: { min: 100, ideal: height, max: 1080 },
    frameRate: { ideal: framerate }
}
Perhaps a lower framerate? Try reducing the size and see if that helps too :)
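The same min/ideal/max shape can be wrapped in a helper and, in current browsers, also applied to an already-captured screen track via applyConstraints (captureConstraints and throttleScreenTrack are made-up names; the numbers are illustrative):

```javascript
// Build a constraint set that caps the shared video's size and framerate.
function captureConstraints(width, height, frameRate) {
    return {
        width: { min: 100, ideal: width, max: 1920 },
        height: { min: 100, ideal: height, max: 1080 },
        frameRate: { ideal: frameRate, max: frameRate },
    };
}

// Browser-only: throttle an existing screen-share track after capture.
async function throttleScreenTrack(stream, fps = 10) {
    const [track] = stream.getVideoTracks();
    await track.applyConstraints(captureConstraints(1280, 720, fps));
}
```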

getUserMedia not capturing the screen correctly

Using the following code:
function captureScreen(size) {
    navigator.webkitGetUserMedia({
        audio: false,
        video: {
            mandatory: {
                chromeMediaSource: 'desktop',
                minWidth: size.width,
                maxWidth: size.width,
                minHeight: size.height,
                maxHeight: size.height,
                minFrameRate: 1,
                maxFrameRate: 1
            }
        }
    }, gotStream, getUserMediaError);

    function gotStream(stream) {
        var video = document.createElement('video');
        video.addEventListener('loadedmetadata', function() {
            var canvas = document.createElement('canvas');
            canvas.width = this.videoWidth;
            canvas.height = this.videoHeight;
            var context = canvas.getContext("2d");
            context.drawImage(this, 0, 0);
        }, false);
        video.src = URL.createObjectURL(stream);
    }

    function getUserMediaError(e) {
        console.log('getUserMediaError: ' + JSON.stringify(e, null, '---'));
    }
}
Gives me the following result:
Notice how the right side of the image is slightly wrapped around to the left side? For some reason this happens on my laptop (1366x768) and a friend's laptop (1366x768), but not on my desktop (3840x1080, dual screen). The size parameter passed to the function is always the correct, actual size of the entire desktop area. Even when I hardcode the min and max width/height, I get the same result. Is there any way to fix this? Am I doing something wrong?
I'm building an Electron app which needs to take a screenshot of the user's desktop. There's also nothing else on the web page, and I am using a reset CSS.
